Key Documents

Rome Call for AI Ethics (2020)

A Vatican-promoted document calling for the development of AI in line with ethical principles such as transparency, responsibility, and fairness.

Antiqua et nova (2025)

Vatican document on the relationship between artificial intelligence and human intelligence, emphasizing human dignity, autonomy, and a spiritual anthropology.

Ethics Guidelines for Trustworthy AI (2019)

EU guidelines defining trustworthy AI through seven key requirements, including transparency, human oversight, and accountability.

UNESCO Ethics Recommendation (2021)

The first global standard-setting instrument on AI ethics, grounding AI governance in human rights and sustainable development.

OECD AI Principles (2019)

Intergovernmental principles promoting innovative and trustworthy AI that respects human rights and democratic values, endorsed by the G20 and adopted by many national governments.

OpenAI Mission Statement (2023)

OpenAI's stated mission is to ensure that artificial general intelligence (AGI) benefits all of humanity, with a focus on safety and cooperative alignment.

Microsoft Research: Sparks of AGI (2023)

A paper suggesting that GPT-4 displays early signs of general intelligence, sparking debate about the emergence of AGI.

EU AI Act (2024)

The first comprehensive legal framework for AI in the EU, classifying AI systems by risk level and establishing corresponding regulatory obligations.

UN AI Resolution (2024)

United Nations General Assembly resolution emphasizing the importance of safe, secure, and trustworthy AI systems for sustainable development.
*Source: UN Digital Library*

NSCAI Final Report (2021)

Comprehensive report by the U.S. National Security Commission on Artificial Intelligence, outlining strategies for responsible and competitive AI development in the context of national security.

AI 2027 Forecast Report (2025)

An independent scenario forecast by AI researchers predicting the emergence of artificial general intelligence (AGI) by the end of 2027. The report traces projected technical and social milestones month by month, highlighting both transformative potential and existential risks.

Floridi – AI4People (2018)

An ethical framework for a “good AI society” based on five principles: beneficence, non-maleficence, autonomy, justice, and explicability.

Geoffrey Hinton Statement (2023)

A public warning from a leading AI pioneer on the risks of AGI and the potential loss of human control over advanced AI systems.