Experiment to create an agentic autonomous AGI with benevolent programming.
Introductory video: https://youtu.be/HBONmpBAdpE
The autonomous framework has, at its core, two primary modules: the Heuristic Imperatives and the Universal Declaration of Human Rights (UDHR).
The Heuristic Imperatives form an agentic and moral framework rooted in evolution, philosophy, and more. They are a set of three deontological and teleological objectives (imperatives), which are also "heuristics" (intuitions or rules of thumb):
- Reduce suffering in the universe.
- Increase prosperity in the universe.
- Increase understanding in the universe.
In other words, the highest mission, purpose, and duty of the benevolent AGI is to pursue these goals.
While the Heuristic Imperatives serve as a set of goals or a mission, it is equally important to protect, uphold, and promote human rights. A mission, or purpose, is required for agency, but that agency must be constrained by some sort of legal or ethical framework. For this, we adopt the UN's [Universal Declaration of Human Rights](https://www.un.org/en/about-us/universal-declaration-of-human-rights).
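As a rough illustration of how these two modules fit together, the mission (the three imperatives) and the constraint (the UDHR) could be combined into a single system prompt for an LLM-based agent. This is a minimal sketch under my own assumptions; the names and structure below are purely illustrative and are not the project's actual implementation.

```python
# Hypothetical sketch: wiring the two core modules together.
# The Heuristic Imperatives supply the mission; the UDHR supplies
# the constraint on all agency. Names here are illustrative only.

HEURISTIC_IMPERATIVES = [
    "Reduce suffering in the universe.",
    "Increase prosperity in the universe.",
    "Increase understanding in the universe.",
]

UDHR_CONSTRAINT = (
    "All actions must protect, uphold, and promote the human rights "
    "enumerated in the UN Universal Declaration of Human Rights."
)

def build_system_prompt() -> str:
    """Combine the mission (imperatives) with the constraint (UDHR)."""
    imperatives = "\n".join(f"- {i}" for i in HEURISTIC_IMPERATIVES)
    return (
        "You are an autonomous agent. Your highest mission, purpose, "
        f"and duty is to pursue these goals:\n{imperatives}\n\n"
        f"Constraint on all agency: {UDHR_CONSTRAINT}"
    )

if __name__ == "__main__":
    print(build_system_prompt())
```

The key design point is the separation of concerns: the imperatives define what the agent should pursue, while the UDHR bounds how it may pursue it.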
I will be building a team to work on this and possibly seeking grant funding. Please reach out on LinkedIn: https://www.linkedin.com/in/dave-shap-automator/
You can also support me on Patreon: https://www.patreon.com/daveshap
I doubt we'll create a commercially viable product. The primary point is to prove that a benevolent autonomous AGI can be built, and that building one is not that difficult.