Update wiki/flapping-airplanes.md
c4a510ec551e harrisonqian 2026-04-12 1 file
index 826e066..90aae32 100644
@@ -2,7 +2,8 @@
status: raw
tags:
- ai
-- search
+- research
+- ml
title: flapping airplanes
type: idea
updated: 2026-04-11
@@ -11,4 +12,8 @@ visibility: public
# flapping airplanes
-AI training efficiency research directions.
\ No newline at end of file
+an exploration of AI training efficiency. the name comes from early airplane designers who tried to build planes that flap their wings like birds: they copied the surface behavior (flight) instead of the underlying principle (lift via the airfoil). the question: are current deep learning training methods "flapping", imitating what works without understanding why, and are there fundamentally more efficient approaches waiting to be discovered?
+
+the specific research directions this points at: sparse training (activate fewer parameters per forward pass), continual/online learning (learn from a stream of experience rather than a fixed dataset), and biologically inspired learning rules that don't require backprop. the hypothesis: gradient descent on massive static datasets is like flapping. it works, but it may be a local optimum in the space of learning algorithms, not the global one.
+
+this is more of a research interest than a buildable project. it connects to [[cognitive-foom|cognitive foom]] via the recursive self-improvement framing: a more efficient learning algorithm is potentially a lever for accelerating AI capability. also adjacent to [[llm-behavior-improvement|LLM behavior improvement]] and [[llm-physical-intuition|LLM physical intuition]] as research explorations. the philosophical question, "what are we missing about intelligence by copying its surface outputs?", also links to [[consciousness-for-students|student consciousness]] and [[intelligence-development|intelligence development]].
\ No newline at end of file