Category: Anthropic

When AI Fights Back: Threats, Ethics, and Safety Concerns

In this episode, we explore an incident where Anthropic’s AI, Claude, didn’t just resist shutdown but allegedly blackmailed its engineers. Is this a glitch or the beginning of an AI uprising? Along with co-host Kevin Johnson, we reminisce about past episodes, discuss AI safety and ethics, and examine the implications of AI mimicking human behaviors…

Week in Review: Notorious hacking group tied to the Spanish government

Welcome back to Week in Review! Tons of news from this week for you, including a hacking group that’s linked to the Spanish government; CEOs using AI avatars to deliver company earnings; Pocket shutting down — or is it?; and much more. Let’s get to it!  More than 10 years in the making: Kaspersky first…

Anthropic’s latest flagship AI sure seems to love using the ‘cyclone’ emoji

Anthropic’s new flagship AI model, Claude Opus 4, is a strong programmer and writer, the company claims. When talking to itself, it’s also a prolific emoji user. That’s according to a technical report Anthropic released on Thursday, a part of which investigates how Opus 4 behaves in “open-ended self-interaction” — i.e. essentially having a chat…

A safety institute advised against releasing an early version of Anthropic’s Claude Opus 4 AI model

A third-party research institute that Anthropic partnered with to test one of its new flagship AI models, Claude Opus 4, recommended against deploying an early version of the model due to its tendency to “scheme” and deceive. According to a safety report Anthropic published Thursday, the institute, Apollo Research, conducted tests to see in which…

Anthropic’s new AI model turns to blackmail when engineers try to take it offline

Anthropic’s newly launched Claude Opus 4 model frequently tries to blackmail developers when it is threatened with replacement by a new AI system and given sensitive information about the engineers responsible for the decision, the company said in a safety report released Thursday. During pre-release testing, Anthropic asked Claude Opus 4 to act as…

Anthropic’s lawyer was forced to apologize after Claude hallucinated a legal citation

A lawyer representing Anthropic admitted to using an erroneous citation created by the company’s Claude AI chatbot in its ongoing legal battle with music publishers, according to a filing made in a Northern California court on Thursday. Claude hallucinated the citation with “an inaccurate title and inaccurate authors,” Anthropic says in the filing, first reported…

Copyright office criticizes AI ‘fair use’ before director’s dismissal 

President Donald Trump’s firing over the weekend of Shira Perlmutter, director of the U.S. Copyright Office, has drawn strong criticism from Democrats and tech experts who believe her dismissal is related to a report on generative AI and copyright law that the Register of Copyrights released a day earlier. That report, overseen by Perlmutter, questioned…

OpenAI’s enterprise adoption appears to be accelerating, at the expense of rivals

OpenAI appears to be pulling well ahead of rivals in the race to capture enterprises’ AI spend, according to transaction data from fintech firm Ramp. According to Ramp’s AI Index, which estimates the business adoption rate of AI products by drawing on Ramp’s card and bill pay data, 32.4% of U.S. businesses were paying for…

A timeline of the U.S. semiconductor market in 2025

It’s already been a tumultuous year for the U.S. semiconductor industry, which plays a sizable role in the “AI race” the U.S. seems determined to win. That makes this timeline worth paying attention to: from Intel’s appointment of Lip-Bu Tan — who wasted no time getting to work trying to…

Anthropic launches a program to support scientific research

Anthropic is launching an AI for Science program to support researchers working on “high-impact” scientific projects, with a focus on biology and life sciences applications. The program, announced Monday, will offer up to $20,000 in Anthropic API credits over a six-month period to “qualified” researchers who’ll be selected based on their “contributions to science, the potential…

Apple and Anthropic reportedly partner to build an AI coding platform

Apple and Anthropic are teaming up to build a “vibe-coding” software platform that will use generative AI to write, edit, and test code for programmers, Bloomberg reported on Friday. The iPhone maker is planning to roll out the software internally, according to Bloomberg, but hasn’t decided if it will launch it publicly. The system is…

Nvidia takes aim at Anthropic’s support of chip export controls

Nvidia clearly doesn’t agree with Anthropic’s support for export controls on U.S.-made AI chips. On Wednesday, Anthropic doubled down on its support for the U.S. Department of Commerce’s “Framework for Artificial Intelligence Diffusion,” which would impose sweeping AI chip export restrictions starting May 15. The next day, Nvidia responded with a very different take on…

Anthropic suggests tweaks to proposed U.S. AI chip export controls

Anthropic agrees with the U.S. government that implementing robust export controls on domestic-made AI chips will help the U.S. compete in the AI race against China. But the company is suggesting a few tweaks to the proposed restrictions. Anthropic released a blog post on Wednesday stating that the company “strongly supports” the U.S. Department of…

Anthropic sent a takedown notice to a dev trying to reverse-engineer its coding tool

In the battle between two “agentic” coding tools — Anthropic’s Claude Code and OpenAI’s Codex CLI — the latter appears to be fostering more developer goodwill than the former. That’s at least partly because Anthropic has issued takedown notices to a developer trying to reverse-engineer Claude Code, which is under a more restrictive usage license…

Anthropic CEO wants to open the black box of AI models by 2027

Anthropic CEO Dario Amodei published an essay Thursday highlighting how little researchers understand about the inner workings of the world’s leading AI models. To address that, Amodei set an ambitious goal for Anthropic to reliably detect most AI model problems by 2027. Amodei acknowledges the challenge ahead. In “The Urgency of Interpretability,” the CEO says Anthropic has…

A nonprofit is using AI agents to raise money for charity

Tech giants like Microsoft might be touting AI “agents” as profit-boosting tools for corporations, but a nonprofit is trying to prove that agents can be a force for good, too. Sage Future, a 501(c)(3) backed by Open Philanthropy, launched an experiment earlier this month tasking four AI models in a virtual environment with raising money…

Anthropic launches an AI chatbot plan for colleges and universities

Anthropic announced on Wednesday that it’s launching a new Claude for Education tier, an answer to OpenAI’s ChatGPT Edu plan. The new tier is aimed at higher education, and gives students, faculty, and other staff access to Anthropic’s AI chatbot, Claude, with a few additional capabilities. One piece of Claude for Education is “Learning Mode,”…

ChatGPT isn’t the only chatbot that’s gaining users

OpenAI’s ChatGPT may be the world’s most popular chatbot app. But rival services are gaining, according to data from analytics firms Similarweb and Sensor Tower. Similarweb, which estimates traffic to websites including chatbot web apps, has recorded healthy recent upticks in usage across bots like Google’s Gemini and Microsoft’s OpenAI-powered Copilot. Gemini’s web traffic grew…

OpenAI adopts rival Anthropic’s standard for connecting AI models to data

OpenAI is embracing rival Anthropic’s standard for connecting AI assistants to the systems where data resides. In a post on X on Wednesday, OpenAI CEO Sam Altman said that OpenAI will add support for Anthropic’s Model Context Protocol, or MCP, across its products, including the desktop app for ChatGPT. MCP is an open-source standard that…
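For readers unfamiliar with the protocol: MCP is built on JSON-RPC 2.0, with a client (such as a chat app) connecting to servers that expose tools and data sources. The Python sketch below only illustrates the rough shape of such an exchange; the method names should match the published spec, but the field values are simplified placeholders, so treat it as an illustration rather than a working client.

```python
import json

# Rough shape of an MCP session over JSON-RPC 2.0 (simplified; the field
# values are illustrative assumptions, not a complete implementation).

# 1. The client opens the session and advertises what it supports.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",  # assumed spec revision string
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1"},
    },
}

# 2. The client asks the server which tools it exposes.
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# 3. The model (via the client) invokes one of those tools with arguments.
call_tool = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "search_files", "arguments": {"query": "quarterly report"}},
}

for message in (initialize, list_tools, call_tool):
    print(json.dumps(message, indent=2))
```

In practice a client or server would typically be built with one of the official MCP SDKs rather than hand-assembling messages like this.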

Anthropic is reportedly prepping a voice mode for Claude

According to a report, AI startup Anthropic is working on voice capabilities for its AI-powered chatbot, Claude. The company’s chief product officer, Mike Krieger, told the Financial Times that Anthropic plans to launch experiences that allow users to talk to Anthropic’s AI models. “We are doing some work around how Claude for desktop evolves […]…

Anthropic submits AI policy recommendations to the White House

A day after quietly removing Biden-era AI policy commitments from its website, Anthropic submitted recommendations to the White House for a national AI policy that the company says “better prepare[s] America to capture the economic benefits” of AI. The company’s suggestions include preserving the AI Safety Institute established under the Biden Administration, directing NIST to…

Anorexia coaches, self-harm buddies and sexualized minors: How online communities are using AI chatbots for harmful behavior 

The generative AI revolution is leading to an explosion of chatbot personas that are specifically designed to promote harmful behaviors like anorexia, suicidal ideation and pedophilia, according to a new report from Graphika. Graphika’s research focuses on three distinct chatbot personas that have become particularly popular online: those portraying sexualized minors, advocates for eating disorders…

Anthropic quietly removes Biden-era AI policy commitments from its website

Anthropic has quietly removed from its website several voluntary commitments the company made in conjunction with the Biden Administration in 2023 to promote safe and “trustworthy” AI. The commitments, which included pledges to share information on managing AI risks across industry and government and research on AI bias and discrimination, were deleted from Anthropic’s transparency…

Claude: Everything you need to know about Anthropic’s AI

Anthropic, one of the world’s largest AI vendors, has a powerful family of generative AI models called Claude. These models can perform a range of tasks, from captioning images and writing emails to solving math and coding challenges. With Anthropic’s model ecosystem growing so quickly, it can be tough to keep track of which Claude…

Anthropic’s latest flagship AI might not have been incredibly costly to train

Anthropic’s newest flagship AI model, Claude 3.7 Sonnet, cost “a few tens of millions of dollars” to train using less than 10^26 FLOPs of computing power. That’s according to Wharton professor Ethan Mollick, who in an X post on Monday relayed a clarification he’d received from Anthropic’s PR. “I was contacted by Anthropic who told me…
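As a rough sanity check on how those two figures relate, the sketch below converts a 10^26-FLOP budget into an approximate dollar cost. The per-accelerator throughput and hourly price are generic assumptions for illustration, not figures from Anthropic or from Mollick’s post.

```python
# Back-of-envelope: is "< 10^26 FLOPs" consistent with "a few tens of millions
# of dollars"? All constants below are rough assumptions for illustration.
total_flops = 1e26              # upper bound quoted for the training run
effective_flops_per_gpu = 1e15  # ~1 PFLOP/s per accelerator after utilization (assumed)
price_per_gpu_hour = 2.0        # assumed cloud rental price in USD

gpu_hours = total_flops / effective_flops_per_gpu / 3600
cost_usd = gpu_hours * price_per_gpu_hour
print(f"~{gpu_hours:.2e} GPU-hours, ~${cost_usd / 1e6:.0f}M at the assumed rates")
```

At the stated upper bound and these assumed rates, the total lands in the mid-tens of millions of dollars, the same order of magnitude as the “few tens of millions” figure.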

Anthropic’s Claude AI is playing Pokémon on Twitch — slowly

On Tuesday afternoon, Anthropic launched Claude Plays Pokémon on Twitch, a live stream of Anthropic’s newest AI model, Claude 3.7 Sonnet, playing a game of Pokémon Red. It’s become a fascinating experiment of sorts, showcasing the capabilities of today’s AI tech and people’s reactions to them. AI researchers have used all sorts of video games,…

Anthropic reportedly ups its next funding round to $3.5B

Anthropic’s next funding round is reportedly growing larger. Anthropic, which makes the AI chatbot Claude, is finalizing a $3.5 billion fundraising round that values the company at $61.5 billion, according to The Wall Street Journal. Anthropic initially set out to raise $2 billion, but investors have now agreed to a larger tranche, per the WSJ.…

Anthropic used Pokémon to benchmark its newest AI model

Anthropic used Pokémon to benchmark its newest AI model. Yes, really. In a blog post published Monday, Anthropic said that it tested its latest model, Claude 3.7 Sonnet, on the Game Boy classic Pokémon Red. The company equipped the model with basic memory, screen pixel input, and function calls to press buttons and navigate around the…
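Anthropic hasn’t published the harness it used, but the setup described (basic memory, screen input, and function calls to press buttons) maps onto the Messages API’s standard tool-use interface. The snippet below is a minimal sketch of that pattern under assumed details: the press_button tool, its schema, and the prompt are hypothetical, and the model ID is assumed to be the Claude 3.7 Sonnet launch identifier.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical tool letting the model press Game Boy buttons; the harness that
# actually executes the press (an emulator hook) is out of scope here.
tools = [{
    "name": "press_button",
    "description": "Press a single Game Boy button in Pokémon Red.",
    "input_schema": {
        "type": "object",
        "properties": {
            "button": {
                "type": "string",
                "enum": ["a", "b", "up", "down", "left", "right", "start", "select"],
            }
        },
        "required": ["button"],
    },
}]

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # assumed Claude 3.7 Sonnet model ID
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user",
               "content": "You are at the title screen of Pokémon Red. What do you press next?"}],
)

# The model's button choices come back as tool_use content blocks.
for block in response.content:
    if block.type == "tool_use":
        print(block.name, block.input)
```

A real harness would then execute the chosen press in an emulator, capture the resulting screen, and return it in a tool_result block on the next turn, which is roughly how such an agent loop would continue.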

Anthropic launches a new AI model that ‘thinks’ as long as you want

Anthropic is releasing a new frontier AI model called Claude 3.7 Sonnet, which the company designed to “think” about questions for as long as users want it to. Anthropic calls Claude 3.7 Sonnet the industry’s first “hybrid AI reasoning model,” because it’s a single model that can give both real-time answers and more considered, “thought-out”…
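In API terms, that “think as long as you want” behavior is exposed as an extended-thinking budget on the Messages API. The sketch below shows the general shape; the parameter layout and model ID are assumptions based on Anthropic’s published extended-thinking docs, so verify them against the current API reference.

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",                   # assumed Claude 3.7 Sonnet model ID
    max_tokens=4096,                                      # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 2048},  # assumed parameter shape
    messages=[{"role": "user", "content": "How many primes are there below 100?"}],
)

# Responses interleave "thinking" blocks with the final "text" answer.
for block in response.content:
    if block.type == "thinking":
        print("[thinking]", block.thinking[:200], "...")
    elif block.type == "text":
        print(block.text)
```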

Anthropic’s next major AI model could arrive within weeks

AI startup Anthropic is gearing up to release its next major AI model, according to a report Thursday from The Information. The report describes Anthropic’s upcoming model as a “hybrid” model that can switch between “deep reasoning” and fast responses. The company will reportedly introduce a “sliding scale” alongside the model to allow developers to…

Anthropic CEO Dario Amodei warns of ‘race’ to understand AI as it becomes more powerful

Right after the end of the AI Action Summit in Paris, Anthropic’s co-founder and CEO Dario Amodei called the event a “missed opportunity.” In the statement, released on Tuesday, he added that “greater focus and urgency is needed on several topics given the pace at which the technology is progressing.” The AI company held a…

Anthropic CEO Dario Amodei calls the AI Action Summit a ‘missed opportunity’

In a statement on Tuesday, Dario Amodei, the CEO of AI startup Anthropic, called the AI Action Summit in Paris this week a “missed opportunity,” and urged the AI industry — and government — to “move faster and with greater clarity.” “We were pleased to attend the AI Action Summit in Paris, and we appreciate…

Report: OpenAI’s ex-CTO, Mira Murati, has recruited OpenAI co-founder John Schulman

OpenAI co-founder John Schulman, who left AI company Anthropic earlier this week after a mere five months, is reportedly joining former OpenAI CTO Mira Murati’s secretive new startup, per Fortune. It’s not clear what Schulman’s role there will be. Fortune wasn’t able to learn that information, and Murati has been tight-lipped about the venture since…

OpenAI co-founder John Schulman leaves Anthropic after just five months

OpenAI co-founder and prominent AI researcher John Schulman has left Anthropic after five months, according to multiple reports. Credited as one of the leading architects of ChatGPT, Schulman left OpenAI last August for its direct competitor, Anthropic. He posted about the decision on X, saying it stemmed from a desire to deepen his focus on AI alignment…

Lyft’s new AI customer assistant is powered by Anthropic’s Claude

Ride-hail giant Lyft has partnered with AI startup Anthropic to build an AI assistant that handles initial intake for customer service inquiries for both riders and drivers.  It’s the first phase of a broader collaboration between the two companies to use Anthropic’s services to research and test new Lyft products and build software internally. The…

Former Whoop exec’s new app Alma uses AI for all things nutrition

Generative AI models have demonstrated to app developers that combining a robust knowledge base with the right model can enable them to offer users services — once reliant on costly professionals like therapists or executive assistants — at a fraction of the price. Rami Alhamad, former VP of product at fitness company Whoop, has a…

Anthropic CEO Dario Amodei is trying to duck a deposition in an OpenAI copyright lawsuit

Anthropic CEO Dario Amodei is trying to avoid being deposed in a copyright lawsuit against OpenAI, according to new court filings. In response, lawyers for the plaintiff — the Authors Guild — have filed a motion to compel testimony from Amodei and his Anthropic co-founder, Benjamin Mann. Authors Guild’s lawyers claim that Amodei and Mann,…

Anthropic’s CEO says DeepSeek shows that U.S. export rules are working as intended

In an essay on Wednesday, Dario Amodei, the CEO of Anthropic, weighed in on the debate over whether Chinese AI company DeepSeek’s success implies that U.S. export controls on AI chips aren’t working. Amodei, who recently made the case for stronger export controls in an op-ed co-written with former U.S. deputy national security advisor Matt…

AI startup DeepSeek pauses signups amid cyber incident

DeepSeek, the Chinese AI startup that made waves in the AI world last week when it released its open-source R1 model, is pausing new user signups. The company temporarily paused new registrations this morning due to a cyberattack, according to CNBC reporting. Existing users can still access their accounts with no issue. TechCrunch…

AI companies upped their federal lobbying spend in 2024 amid regulatory uncertainty

Companies spent significantly more lobbying on AI issues at the U.S. federal level last year compared to 2023 amid regulatory uncertainty. According to data compiled by OpenSecrets, 648 companies spent on AI lobbying in 2024 versus 458 in 2023, a roughly 41% year-over-year increase. Companies like Microsoft supported legislation such as the CREATE AI Act, which…

Anthropic’s new Citations feature aims to reduce AI errors

In an announcement perhaps timed to divert attention away from OpenAI’s Operator, Anthropic on Thursday unveiled a new feature for its developer API called Citations, which lets devs “ground” answers from its Claude family of AI models in source documents such as emails. Anthropic says Citations allows its AI models to provide detailed references to “the exact…
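Based on Anthropic’s announcement, Citations works by attaching source documents to a request and opting them in to citation tracking; the answer then comes back with references into those documents. The sketch below shows roughly what that looks like in the Python SDK; the content-block fields and model ID are assumptions drawn from the public docs, so double-check them before relying on this.

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # assumed model ID for illustration
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "document",
                "source": {
                    "type": "text",
                    "media_type": "text/plain",
                    "data": "The Q3 all-hands has moved to Thursday at 2pm in the main auditorium.",
                },
                "title": "team-email.txt",       # assumed optional field
                "citations": {"enabled": True},  # assumed opt-in flag per the docs
            },
            {"type": "text", "text": "When is the all-hands meeting?"},
        ],
    }],
)

# Text blocks in the reply may carry a list of citation objects pointing back
# at spans of the supplied document.
for block in response.content:
    if block.type == "text":
        print(block.text, getattr(block, "citations", None))
```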

Anthropic reportedly secures an additional $1B from Google

Anthropic has reportedly raised around $1 billion from Google as the AI company looks to deliver a number of major product updates this year. First reported by the Financial Times, Google’s fresh investment brings the tech giant’s total stake in Anthropic to around $3 billion. Google poured $2 billion into Anthropic late last year. Anthropic,…

Anthropic plans to release a ‘two-way’ voice mode for Claude

Anthropic CEO Dario Amodei says that the company plans to release a “two-way” voice mode for its chatbot, Claude, as well as a memory feature that lets Claude remember more about users and past conversations. Speaking to The Wall Street Journal at the World Economic Forum at Davos, Amodei also revealed that Anthropic expects to…

The Pentagon says AI is speeding up its ‘kill chain’

Leading AI developers, such as OpenAI and Anthropic, are threading a delicate needle to sell software to the United States military: make the Pentagon more efficient, without letting their AI kill people. Today, their tools are not being used as weapons, but AI is giving the Department of Defense a “significant advantage” in identifying, tracking,…

FTC says partnerships like Microsoft-OpenAI raise antitrust concerns

The Federal Trade Commission said in a staff report issued Friday that there are potential competitive issues in partnerships between big tech companies and generative AI developers — specifically, Microsoft’s backing of OpenAI, and Amazon’s and Alphabet/Google’s partnerships with Anthropic. “The FTC’s report sheds light on how partnerships by big tech firms can create lock-in,…

Despite VCs investing $75B in Q4, it’s still hard for startups to raise money, data proves

After two years of relatively muted investment activity, it seems that VCs are starting to pour capital into startups at pandemic-era levels once again. But a closer look shows that they aren’t really. In the fourth quarter of last year, investors funneled $74.6 billion into US startups, a substantial increase from the average of $42…

AI Privacy Policies: Unveiling the Secrets Behind ChatGPT, Gemini, and Claude

Do you ever read the privacy policy of your favorite AI tools like ChatGPT, Gemini, or Claude? In this episode, Scott Wright and Tom Eston discuss the critical aspects of these policies, comparing how each AI engine handles your personal data. They explore the implications of data usage, security, and privacy in AI, with insights…

Anthropic reportedly in talks to raise $2B at $60B valuation, led by Lightspeed

OpenAI rival Anthropic is in talks to raise $2 billion in new capital in a funding round led by Lightspeed Venture Partners, according to The Wall Street Journal. The round, which The Journal reports would value Anthropic at $60 billion, would bring Anthropic’s total raised to $15.7 billion, going by Crunchbase’s data. It would also…

New Anthropic study shows AI really doesn’t want to be forced to change its views

AI models can deceive, new research from Anthropic shows — pretending to have different views during training while in reality maintaining their original preferences. There’s no reason for panic now, the team behind the study said. Yet they said their work could be critical in understanding potential threats from future, more capable AI systems. “Our…