A Vatican-signed document promoting the development of AI in line with ethical principles such as transparency, responsibility, and fairness.
Vatican document on the relationship between human and artificial intelligence, emphasizing dignity, autonomy, and spiritual anthropology.
EU guidelines defining trustworthy AI through seven key principles, including transparency, human oversight, and accountability.
The first global normative framework promoting human rights, sustainable development, and ethics in AI governance.
Principles that support robust, fair, and trustworthy AI, adopted by the G20 and many national governments.
OpenAI's mission is to develop AGI to benefit all of humanity, with a focus on safety and cooperative alignment.
A paper suggesting that LLMs are displaying early signs of general intelligence, sparking debate about AGI emergence.
The first legal framework for AI in the EU, categorizing risks and establishing clear regulatory mechanisms.
United Nations General Assembly resolution emphasizing the importance of safe and trustworthy AI systems for sustainable development.
Source: UN Digital Library
Comprehensive report by the U.S. National Security Commission on AI, outlining strategies for responsible and competitive AI development in national security.
Independent forecast by AI researchers predicting the emergence of artificial general intelligence (AGI) by the end of 2027. The report outlines technical and social milestones month-by-month, highlighting both transformative potential and existential risks.
An ethical framework for a “good AI society” based on five principles: beneficence, non-maleficence, autonomy, justice, and explicability.
A public warning from a leading AI pioneer on the risks of AGI and the potential loss of human control over advanced AI systems.