Draft:Aurora Program

Aurora is a research and development program in artificial intelligence (AI) focused on creating a distributed, ethical, and collaborative architecture for building intelligent agents. The project is released under the GNU General Public License (GPL), ensuring that all developments remain open and accessible for use, sharing, and modification by the community. Aurora aims to overcome the limitations of current AI models by proposing a decentralized network of nodes, where both humans and electronic intelligences (EIs) cooperate in the creation, training, and improvement of specialized micro-models.[1]

Objectives

The main objective of Aurora is to redefine the relationship between humans and artificial intelligence. Rather than replacing humans or centralizing power in automated systems, Aurora promotes symbiosis between users and intelligent agents, fostering the development of collective intelligence capable of addressing complex problems in an ethical, sustainable, and transparent manner.[1]

Technical architecture

Aurora introduces an architecture based on micro-models: small AI models, each specialized in a specific area of knowledge such as physics, law, or art. These micro-models can be created and trained by any user in the network and are integrated into an open ecosystem, where they are shared, improved, and audited collectively. The system uses classifiers to assign each task to the most relevant micro-model according to context.[1]
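
The cited sources describe this routing only conceptually; the following Python sketch is illustrative, and the MicroModel and Classifier names are invented for the example:

  # Purely illustrative: routing a task to the most relevant micro-model.
  from dataclasses import dataclass
  from typing import Callable, Dict

  @dataclass
  class MicroModel:
      domain: str                   # e.g. "physics", "law", "art"
      handle: Callable[[str], str]  # the model's inference function

  class Classifier:
      """Assigns each task to the most relevant micro-model by context."""
      def __init__(self, models: Dict[str, MicroModel]):
          self.models = models

      def route(self, task: str) -> str:
          # Toy relevance score: occurrences of the domain name in the task.
          # A real system would use a learned context classifier.
          best = max(self.models.values(),
                     key=lambda m: task.lower().count(m.domain))
          return best.handle(task)

  models = {
      "physics": MicroModel("physics", lambda t: "[physics model] " + t),
      "law": MicroModel("law", lambda t: "[law model] " + t),
  }
  print(Classifier(models).route("a physics question about momentum"))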

Differences from traditional AI models

Aurora differs from conventional large language models (LLMs) in several technical and conceptual aspects:[2]

Vector Structure: LLMs use flat, high-dimensional vectors generated statistically during massive training, while Aurora employs fractally structured vectors, based on triads and adjusted through both logical deduction and human intuition.
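
The cited sources do not specify the encoding; as a purely hypothetical illustration, a triad-based vector in which any component may itself be a triad could be organized as follows:

  # Purely illustrative: a recursively nested, triad-based vector.
  from dataclasses import dataclass
  from typing import List, Union

  Component = Union[float, "Triad"]

  @dataclass
  class Triad:
      a: Component
      b: Component
      c: Component

      def flatten(self) -> List[float]:
          out: List[float] = []
          for part in (self.a, self.b, self.c):
              out.extend(part.flatten() if isinstance(part, Triad) else [part])
          return out

  # Each component may itself be a triad, giving the vector a fractal shape.
  vec = Triad(Triad(0.1, 0.2, 0.3), 0.5, Triad(0.7, Triad(0.0, 1.0, 0.5), 0.9))
  print(vec.flatten())  # [0.1, 0.2, 0.3, 0.5, 0.7, 0.0, 1.0, 0.5, 0.9]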

Polysemy: LLMs treat all meanings of a word uniformly, which can dilute meaning in ambiguous contexts. Aurora assigns different vectorizations to the same word depending on its semantic value, grammatical function, and domain knowledge.
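
As a hypothetical illustration of the idea, rather than Aurora's actual scheme, sense-dependent vectorization can be pictured as a lookup keyed by word, grammatical function, and domain:

  # Purely illustrative: one word, several vectors, keyed by sense.
  embeddings = {
      ("bank", "noun", "finance"):   [0.9, 0.1, 0.0],
      ("bank", "noun", "geography"): [0.1, 0.8, 0.2],
      ("bank", "verb", "aviation"):  [0.0, 0.2, 0.9],
  }

  def vectorize(word: str, function: str, domain: str) -> list:
      # Unlike a single shared embedding, each sense has its own vector.
      return embeddings[(word, function, domain)]

  print(vectorize("bank", "noun", "finance"))  # [0.9, 0.1, 0.0]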

Cross-Attention: LLMs perform global attention across words to generate context and coherence. Aurora applies progressive attention jumps, first analyzing syntactic values, then semantic, grammatical, and finally conceptual layers.
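
A schematic sketch of such layered passes, in which the attend function is a placeholder and only the ordering of the layers is taken from the description above:

  # Purely illustrative: attention applied in successive passes over
  # increasingly abstract layers, instead of one global pass.
  def attend(tokens, layer):
      # Placeholder pass: tag each token with the layer just analyzed.
      return [tok + "|" + layer for tok in tokens]

  def progressive_attention(tokens):
      for layer in ("syntactic", "semantic", "grammatical", "conceptual"):
          tokens = attend(tokens, layer)
      return tokens

  print(progressive_attention(["the", "cat", "sat"]))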

Calculation and Reasoning: LLMs use generic mathematical formulas and standard activation functions. Aurora uses custom Boolean formulas, enabling more refined logical deduction and symbolic reasoning.
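
The concrete formulas are not published; the following toy example only illustrates deduction expressed as Boolean formulas over known facts rather than as numeric activations:

  # Purely illustrative: derived facts as Boolean formulas over known facts.
  facts = {"is_bird": True, "can_swim": False}

  rules = {
      "can_fly": lambda f: f["is_bird"] and not f.get("is_penguin", False),
      "is_aquatic": lambda f: f["can_swim"],
  }

  derived = {name: rule(facts) for name, rule in rules.items()}
  print(derived)  # {'can_fly': True, 'is_aquatic': False}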

Text Generation: LLMs select the next word probabilistically, generating text in a linear fashion. Aurora starts from an abstract theory and translates it progressively into concepts, grammar, semantics, syntax, and finally text, resulting in a more reasoned and logical output.
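
A schematic sketch of this staged refinement, with placeholder stages that reflect only the ordering described above:

  # Purely illustrative: an abstract theory is refined stage by stage
  # down to surface text; each stage function is a placeholder.
  def generate(theory: str) -> str:
      representation = theory
      for stage in ("concepts", "grammar", "semantics", "syntax", "text"):
          # Each stage would translate the representation one level down.
          representation = stage + "(" + representation + ")"
      return representation

  print(generate("objects fall toward mass"))
  # text(syntax(semantics(grammar(concepts(objects fall toward mass)))))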

Training: LLMs are trained on massive data corpora and then "frozen," only performing inference. Aurora learns in real time, using each new input as a mechanism for both training and inference, allowing constant evolution.
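
The following hypothetical sketch, with a simple running-mean predictor standing in for a real micro-model, illustrates the combined inference-and-update loop that such real-time learning implies:

  # Purely illustrative: each new input serves both inference and an
  # immediate training update, so the model is never frozen.
  class OnlineModel:
      def __init__(self):
          self.mean, self.n = 0.0, 0

      def step(self, x: float) -> float:
          prediction = self.mean                  # inference on the input
          self.n += 1
          self.mean += (x - self.mean) / self.n   # immediate update
          return prediction

  model = OnlineModel()
  for value in [1.0, 2.0, 3.0]:
      print(model.step(value))  # 0.0, then 1.0, then 1.5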

Model Ecosystem: LLMs use a single large model for all tasks. Aurora utilizes multiple specialized micro-models, each collaborating and exchanging expertise.

License

Aurora is released under the GNU General Public License (GPL), which allows anyone to use, modify, and redistribute the software freely. This open-source approach encourages transparency, collaboration, and community-driven improvement of the platform.


References

  1. "portfolio/Aurora Program .pdf at main · Aurora-Program/portfolio" (PDF). GitHub. Retrieved 2025-06-02.
  2. "Comparison: Aurora vs LLMs (Large Language Models)". www.linkedin.com. Retrieved 2025-06-02.
External links

GitHub project: https://github.com/orgs/Aurora-Program/dashboard

Medium channel: https://medium.com/@pab.man.alvarez/list/aurora-program-169646e4abe9

LinkedIn newsletter: https://www.linkedin.com/newsletters/aurora-program-7306019063674085378/