A co-founder and board member at Singularity Group, and a director of engineering at Google, responds to the Future of Life Institute's recent letter, "Pause Giant AI Experiments: An Open Letter."
"This criterion is too vague to be practical. And the proposal faces a serious coordination problem: those that agree to a pause may fall far behind corporations or nations that disagree. There are tremendous benefits to advancing AI in critical fields such as medicine and health, education, pursuit of renewable energy sources to replace fossil fuels, and scores of other fields. I didn't sign, because I believe we can address the signers' safety concerns in a more tailored way that doesn't compromise these vital lines of research.
"I participated in the Asilomar AI Principles Conference in 2017 and was actively involved in the creation of guidelines to create artificial intelligence in an ethical manner. So I know that safety is a critical issue. But more nuance is needed if we wish to unlock AI's profound advantages to health and productivity while avoiding the real perils."
About Singularity Group
Singularity Group is an innovation company that believes technology and entrepreneurship can solve the world's greatest challenges.
We transform the way people and organizations think about exponential technology and the future, and enable them to create and accelerate initiatives that will deliver business value and positively impact people and the planet.
An exponential tech pioneer since 2008, Singularity has grown to become an innovation and transformation hub for over 250,000 CEOs, entrepreneurs, investors, policymakers and individuals in startups, corporations, NGOs, governments and academia. With 58 chapters across 30 countries (and growing) and a community of leaders from around the world, the company has helped launch over 5,000 impact innovation initiatives, and its alumni have started more than 200 companies.
For more information, visit https://su.org.
SOURCE Singularity Group