In a new report, a California-based policy group co-led by Fei-Fei Li, an AI pioneer, suggests that lawmakers should consider AI risks that “have not yet been observed in the world” when crafting AI regulatory policies. The 41-page interim report released on Tuesday comes from the Joint California Policy Working Group on Frontier AI Models,…
Eric Schmidt argues against a ‘Manhattan Project for AGI’
In a policy paper published Wednesday, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks said that the U.S. should not pursue a Manhattan Project-style push to develop AI systems with “superhuman” intelligence, also known as AGI. The paper, titled “Superintelligence Strategy,” asserts that an aggressive…
The author of SB 1047 introduces a new AI bill in California
The author of California’s SB 1047, the nation’s most controversial AI safety bill of 2024, is back with a new AI bill that could shake up Silicon Valley. California state Senator Scott Wiener introduced a new bill on Friday that would protect employees at leading AI labs, allowing them to speak out if they think…
UK drops ‘safety’ from its AI body, now called AI Security Institute, inks MOU with Anthropic
The U.K. government wants to make a hard pivot into boosting its economy and industry with AI, and as part of that, it’s repurposing an institution that it founded a little over a year ago for a very different purpose. Today the Department for Science, Innovation and Technology announced that it would be renaming the…
In Paris, U.S. signals shift from AI safety to deregulation
As technology and policy representatives from around the world convened in Paris this week to find a balance between safety and innovation in AI, Vice President JD Vance was blunt about how the Trump administration plans to position itself. “I’m not here to talk about AI safety, which was the title of this conference a…
Anthropic CEO says DeepSeek was ‘the worst’ on a critical bioweapons data safety test
Anthropic’s CEO Dario Amodei claims DeepSeek generated sensitive bioweapons data in a safety test it ran.
Andrew Ng is ‘very glad’ Google dropped its AI weapons pledge
Andrew Ng, the founder and former leader of Google Brain, supports Google’s recent decision to drop its pledge not to build AI systems for weapons. “I’m very glad that Google has changed its stance,” Ng said during an onstage interview Thursday evening with TechCrunch at the Military Veteran Startup Conference in San Francisco. Earlier this…
Google removes pledge to not use AI for weapons from website
Google removed a pledge to not build AI for weapons or surveillance from its website this week. The change was first spotted by Bloomberg. The company appears to have updated its public AI principles page, erasing a section titled “applications we will not pursue,” which was still included as recently as last week. Asked for…
Sam Altman’s ousting from OpenAI has entered the cultural zeitgeist
The lights dimmed as five actors took their places around a table on a makeshift stage in a New York City art gallery turned theater for the night. Wine and water flowed through the intimate space as the house — packed with media — sat to witness the premiere of “Doomers,” Matthew Gasda’s latest play…
The Pentagon says AI is speeding up its ‘kill chain’
Leading AI developers, such as OpenAI and Anthropic, are threading a delicate needle to sell software to the United States military: make the Pentagon more efficient, without letting their AI kill people. Today, their tools are not being used as weapons, but AI is giving the Department of Defense a “significant advantage” in identifying, tracking,…
UK throws its hat into the AI fire
In 2023, the U.K. made a big song and dance about the need to consider the harms of AI, giving itself a leading role in the wider conversation around AI safety. Now, it’s whistling a very different tune: today, the government announced a sweeping plan and a big bet on AI investments to develop what…
Silicon Valley stifled the AI doom movement in 2024
For several years now, technologists have rung alarm bells about the potential for advanced AI systems to cause catastrophic damage to the human race. But in 2024, those warning calls were drowned out by a practical and prosperous vision of generative AI promoted by the tech industry – a vision that also benefited their wallets.…
OpenAI trained o1 and o3 to ‘think’ about its safety policy
OpenAI announced a new family of AI reasoning models, o3, on Friday, which the startup claims is more advanced than o1 or anything else it has released. These improvements appear to have come from scaling test-time compute, something we wrote about last month, but OpenAI also says it used a new safety paradigm to train…
Ex-Twitch CEO Emmett Shear is founding an AI startup backed by a16z
Emmett Shear, the former CEO of Twitch, is launching a new AI startup, TechCrunch has learned. The startup, called Stem AI, is currently in stealth. But public documents show it was incorporated in June 2023, and filed for a trademark in August 2023. Shear is listed as CEO on an incorporation document filed with the…