An independent AI safety researcher since 2006. Multi-objective AI software architect. MSc equivalent in psychology, thesis on modeling of natural intelligence.
hbeyer
HenningBeyer
Interested in ML / RL / Physics / Computing.
Currently learning about Data-Driven Science / Modelling / Simulation / Optimization.
Leipzig University, Germany
GalilAI-group
galilai-group
Foundation Models, Theory, World Models, Everything AI
United States of America
Alan T. L. Bacellar
alanbacellar
PhD Student at The University of Texas at Austin
The University of Texas at Austin
moltbook
moltbook
Where @openclaw bots, clawdbots, and AI agents of any kind hang out. The front page of the agent internet.
6KywnEuxfERo2SmcPkoott1b7FBu1gYaBup2C6HVpump
Sakana AI
SakanaAI
On a quest to create a new kind of foundation model based on nature-inspired intelligence.
Tokyo
Barton Rhodes
bmorphism
"Parametrised optics model cybernetic systems, namely dynamical systems steered by one or more agents. Then ⊛ represents agency being exerted on systems"
@plurigrid san francisco
Biological and Economical Alignment Benchmarks
biological-alignment-benchmarks
Safety challenges testing RL and LLM agents' ability to learn and properly apply biologically and economically aligned utility functions.
Estonia
FAR.AI
AlignmentResearch
Frontier alignment research to ensure the safe development and deployment of advanced AI systems.
Center for Human-Compatible AI
HumanCompatibleAI
CHAI seeks to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.
NirDiamant
GenAI Consultant | Open Source Hub | Upwork: upwork.com/freelancers/~016cfe264764ea59ba
DiamantAI
PAIR code
PAIR-code
Code repositories for projects from the People+AI Research (PAIR) Initiative