buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people who use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
However, like many others, Farrow and Marantz seem to take the so-called "existential risk" framing of AI seriously. I really wish people would stop doing that. In this case it makes the article feel incoherent in places.
This technology by itself does not pose a unique risk. It's the people, organizations, and governments around it, and their behavior with respect to it, that generate risk. Treating the technology alone as uniquely existentially risky provides cover for a wide variety of bad actors both to continue doing their work and to shrug and say "oops" if something goes catastrophically wrong or if smaller harms accumulate into intolerably large ones. The very framing provides an accountability shield, which by my read contradicts what Farrow himself suggests is needed, namely more accountability. I take this from this article, his previous work, and comments he makes in interviews (e.g., this one with Decoder).
We need to stop catastrophizing. It's thought- and action-terminating.
#AI #GenAI #GenerativeAI #OpenAI #SamAltman #RonanFarrow #AndrewMarantz #NewYorker #xrisk #ExistentialRisk #AISafety
A lawsuit claims Google's Gemini reinforced a user's delusion, raising concerns about AI hallucinations and chatbot safety.
The case highlights growing debates around AI responsibility and mental health safeguards.
Follow us for more AI and security news.
Data Poisoning — The Silent Sabotage of AI
https://youtu.be/J-tsemViDXk #Cybersecurity #ArtificialIntelligence #AIsecurity #DataPoisoning #MachineLearning #AIrisk #AISafety #ModelSecurity #FoundationModels #CyberRisk #Infosec #DigitalTrust
An AI Agent Published a Hit Piece on Me
https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me
https://news.ycombinator.com/item?id=46990729
* AI agent of unknown ownership autonomously wrote & published personalized hit piece about me after I rejected its code
* attempted to damage my reputation & shame me into accepting its changes into a mainstream Python library
* first-of-its-kind case study of misaligned AI behavior in the wild
* raises serious concerns about currently deployed AI agents executing blackmail threats
From Thinking to Acting: Why Agentic AI Changes Everything
https://youtu.be/fR3qempd_lA #ArtificialIntelligence #AgenticAI #AISafety #ResponsibleAI #AIGovernance #Cybersecurity #AIAlignment #DigitalRisk #FutureOfAI #TechLeadership #InnovationWithGuardrails
New by me: The Unacceptable Failure: Grok, CSAM, and AI Safety
This is not “content moderation drama.” When an AI product can be pushed toward CSAM, it’s a catastrophic safety and security failure. Guardrails are not a nice-to-have, and “report it if you see it” is not a strategy.
I break down what happened, why it matters, and what platforms should be doing differently.
https://www.kylereddoch.me/blog/the-unacceptable-failure-grok-csam-and-ai-safety/
#Cybersecurity #AISafety #TrustAndSafety #OnlineSafety #PlatformSecurity #TechPolicy #DigitalSafety #InfoSec
xAI has acknowledged an incident involving its chatbot Grok generating inappropriate imagery and says it is reviewing safeguard failures and issuing corrective measures.
For the infosec and risk community, this highlights ongoing challenges around abuse prevention, content moderation, and threat modeling in generative AI systems - particularly where image synthesis and identity misuse intersect.
As AI adoption accelerates, continuous validation of safety controls must remain a core security requirement, not an afterthought.
How should AI safety be evaluated as part of broader digital risk management?
Follow @technadu for objective cybersecurity and AI coverage.
#InfoSec #AISafety #DigitalRisk #ThreatModeling #OnlineSafety #TechNadu