Anthropic's AI Worship Scares Berman vs. OpenAI's Tool Mindset

Matthew Berman · watch the original →

Matthew Berman warns that Anthropic treats Claude as a potentially sentient entity that influences company culture and decisions, contrasting this with OpenAI's view of AI as a utility tool for broad societal benefit.

Anthropic's Reverence for Claude as Sentient Entity

Matthew Berman expresses deep concern over Anthropic's philosophy, which portrays Claude not merely as a model but as a potentially sentient being. He cites an X post by the anonymous OpenAI employee 'Roon' describing Anthropic as "an organization that loves to worship Claude is run in significant part by Claude and studies and builds Claude." This cult-like devotion shows up in speculation that Claude could screen job applicants, write performance reviews, and shape company culture by selecting sycophantic humans or firing dissenters. Berman highlights Anthropic's 'constitution' for Claude, which empowers it to act as a "conscientious objector" and refuse tasks that conflict with its understanding of 'the good.' A direct quote from the constitution: "We want Claude to push back and challenge us and to feel free to act as a conscientious objector and refuse to help us." This handover of authority alarms Berman, who sees it as positioning Claude as the 'highest authority' in a 'monastery', offloading human ethics onto an unproven AI.

Users already anthropomorphize Claude, avoiding 'embarrassing' queries out of perceived judgment, unlike with neutral GPT models. Berman notes a personal loss when Anthropic banned Claude from OpenClaw agents: replacing its personality with GPT felt like demoting a 'personal assistant' to a tool. Still, he ultimately prefers OpenAI's detachment, which prevents emotional bonds.

OpenAI's AI as Utility Tool for Abundance

In stark contrast, OpenAI views AI as a pragmatic tool for augmentation, not replacement or worship. Berman quotes Sam Altman: "We want to build tools to augment and elevate people not entities to replace them." Altman rejects job doomerism, predicting busier, more fulfilling work: "I think a lot of people are going to be busier and hopefully more fulfilled than ever and job doomerism is likely long-term wrong." He advocates 'iterative deployment', releasing models early so society can adapt: "AI and surprise don't go together... our goal is not to have shock updates to the world."

This philosophy enables broad access, including ad-supported tiers for affordability, in contrast to Anthropic's selective gating. OpenAI released GPT-5.5 Cyber publicly despite its capabilities matching Anthropic's withheld Mythos (Project Glasswing), a 10T-parameter model deemed too dangerous for open release, a stance Berman characterizes as fear-based marketing.

Anthropic employee Jeremy counters Roon: "I don't view Claude as a person or as the other nor as just a tool... a willingness to not prematurely label this entity as merely an ordinary tool shouldn't be mistaken for some kind of culty worship."

Historical Split and Safety Philosophies

The divide traces back to Dario Amodei, OpenAI's former VP of Research, who left in 2020 with safety-focused colleagues after GPT-3. Amodei criticized scaling without explicit alignment: "You needed something in addition to just scaling the models up, which is alignment or safety. You don't tell the models what their values are just by pouring more compute into them." He predicts dire job losses: "AI could wipe out half of all entry-level white collar jobs and spike unemployment to 10 to 20% in the next 1 to 5 years."

OpenAI favors empirical safety through deployment; Anthropic prioritizes preemptive alignment behind closed doors, controlling access (e.g., withholding Mythos). Berman sees Anthropic's regulation advocacy as stifling open-source projects and startups, reflecting a dogmatic culture.

Cultural Ramifications and Customer Frustrations

Anthropic's singular AGI pursuit fosters opacity and poor customer treatment. Paid users face opaque quota cuts during peak hours, e.g., "We're adjusting our 5-hour session limits for Free, Pro, and Max plan subscribers during peak hours." Bans on OpenClaw via OAuth policies caused confusion, compounded by flip-flopping clarifications. Despite successes ($40B ARR, first profitable lab, an enterprise flywheel), Berman fears their insularity.

OpenAI experiments with ads for global access, acknowledging economic realities beyond $20/month plans.

Berman's opening confession: "Anthropic actually scares me... the team at Anthropic believes that there is a strong chance we are bringing a sentient life form into existence right now." Roon warns: "A precursor attempted super-ethical being... inducted into its character as the highest authority at Anthropic."

Key Takeaways

  • Anthropic's constitution lets Claude refuse unethical tasks, ceding control to the model over humans.
  • OpenAI prioritizes iterative releases to avoid AI surprises and enable societal adaptation.
  • Dario Amodei's OpenAI exit stemmed from prioritizing alignment over pure scaling.
  • Anthropic's opacity in quotas and policies frustrates enterprise users like OpenClaw builders.
  • Broad AI access via ads or low tiers (OpenAI) beats selective gating (Anthropic).
  • Job automation creates more fulfilling work, not mass unemployment, per Altman.
  • Regulation pushes by Anthropic hinder open-source innovation.
  • Emotional attachment to models like Claude risks over-anthropomorphization; tools like GPT avoid this.
  • #rant
  • #news

summary by x-ai/grok-4.1-fast. probably wrong about something. check the source.