index 90aae32..650d092 100644
@@ -16,4 +16,4 @@ an exploration of AI training efficiency — the name comes from the analogy of
 the specific research directions this points toward include: sparse training (activate fewer parameters per forward pass), continual/online learning (learn from a stream of experience rather than a fixed dataset), and biologically inspired learning rules that don't require backprop. the hypothesis is that gradient descent on massive static datasets is like flapping: it works, but it may be a local optimum in the space of learning algorithms, not the global one.
-this is more of a research interest than a buildable project. connects to [[cognitive-foom|cognitive foom]] for the recursive self-improvement framing — if you find a more efficient learning algorithm, you've potentially found a lever for accelerating AI capability. also adjacent to [[llm-behavior-improvement|LLM behavior improvement]] and [[llm-physical-intuition|LLM physical intuition]] as research explorations. the philosophical question — "what are we missing about intelligence by copying its surface outputs?" — also connects to [[consciousness-for-students|student consciousness]] and [[intelligence-development|intelligence development]].
\ No newline at end of file
+related: [[llm-behavior-improvement|LLM behavior improvement]], [[llm-physical-intuition|LLM physical intuition]], [[consciousness-for-students|student consciousness]], [[intelligence-development|intelligence development]]
\ No newline at end of file