index 768e161..af81d24 100644
@@ -15,7 +15,7 @@ a vector is just a list of numbers — but that framing undersells it. a vector
- a person's test scores are a vector: (SAT: 1520, GPA: 3.9, AP count: 12)
- a word's meaning is a vector in semantic space: "king" = (0.2, -0.5, 0.8, ...)
-once something is a vector, you can do vector operations on it. you can compute distances (how similar are these two things?), averages (what's the "center" of this group?), and projections (how much of this is in that direction?). at its core, this is [[immediate/arithmetic-everywhere|arithmetic]] lifted into higher dimensions — addition, subtraction, scaling — but now operating on objects far richer than single numbers.
+once something is a vector, you can do vector operations on it. you can compute distances (how similar are these two things?), averages (what's the "center" of this group?), and projections (how much of this is in that direction?). at its core, this is [[arithmetic-everywhere|arithmetic]] lifted into higher dimensions — addition, subtraction, scaling — but now operating on objects far richer than single numbers.
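a minimal sketch of those three operations with numpy (the score vectors are made up for illustration):

```python
import numpy as np

# two hypothetical "test score" vectors: (SAT, GPA, AP count)
a = np.array([1520.0, 3.9, 12.0])
b = np.array([1380.0, 3.5, 8.0])

# distance: how similar are these two things?
dist = np.linalg.norm(a - b)

# average: the "center" of this group
center = (a + b) / 2

# projection: how much of a is in direction d?
d = np.array([1.0, 0.0, 0.0])   # unit vector along the SAT axis
amount = a @ d                   # scalar component of a along d
```

the same three calls work unchanged whether the vectors have 3 components or 300 — that's the point of lifting arithmetic into higher dimensions.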
the semantic space example from my essay: word embeddings represent meanings as vectors in 300-dimensional space. the dot product of two word vectors measures their semantic similarity. and vector arithmetic captures analogies:
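a toy version of the analogy arithmetic — these 4-d vectors are invented for illustration (real embeddings are learned, typically 300-d), but the mechanics are the same:

```python
import numpy as np

# invented 4-d "embeddings"; real ones come from training, not by hand
king  = np.array([0.9, 0.8, 0.1, 0.3])
queen = np.array([0.9, 0.1, 0.8, 0.3])
man   = np.array([0.1, 0.8, 0.1, 0.2])
woman = np.array([0.1, 0.1, 0.8, 0.2])

def cosine(u, v):
    # dot product normalized by lengths: 1.0 means same direction
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# vector arithmetic captures the analogy: king - man + woman ≈ queen
analogy = king - man + woman
# the result should land closer to queen than to the other words
```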
@@ -53,7 +53,7 @@ projecting a vector onto a direction extracts "how much of this is in that direc
but projection is conceptually richer than that:
- in statistics, linear regression is projection: you're projecting data onto the best-fit line
-- in signal processing, fourier analysis is projection: you're projecting a signal onto each frequency component (see [engineering](/wiki/stem/engineering-and-modeling))
+- in signal processing, fourier analysis is projection: you're projecting a signal onto each frequency component (see [[engineering-and-modeling|engineering]])
- in machine learning, dimensionality reduction is projection: compressing high-dimensional data into a lower-dimensional space while preserving the most important structure
any time you're decomposing something into components, you're projecting.
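the regression-as-projection claim can be made concrete: fitting a line by least squares is literally projecting y onto the plane spanned by the columns of the design matrix. the data below is made up; the defining property to check is that the residual is orthogonal to that plane.

```python
import numpy as np

# made-up noisy data near y = x + 1
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.8])

# columns of X span the subspace of all lines a*x + b
X = np.column_stack([x, np.ones_like(x)])

# least squares = project y onto col(X)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # (slope, intercept)
y_hat = X @ coef                                # the projection of y

# what's left over is orthogonal to the subspace — that's what
# makes this a projection and not just a curve fit
residual = y - y_hat
```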
@@ -62,12 +62,12 @@ any time you're decomposing something into components, you're projecting.
the rank of a matrix tells you how many independent dimensions the transformation actually uses. a 1000×1000 matrix might have rank 3 — meaning all that apparent complexity lives in just 3 dimensions.
-this is profound for data analysis. real-world data is almost always lower-dimensional than it appears. a dataset with 100 features might effectively live in a 5-dimensional subspace. finding that subspace (via PCA, SVD, or other methods) is one of the most powerful techniques in data science and [[stem/computer-science|computer science]] more broadly.
+this is profound for data analysis. real-world data is almost always lower-dimensional than it appears. a dataset with 100 features might effectively live in a 5-dimensional subspace. finding that subspace (via PCA, SVD, or other methods) is one of the most powerful techniques in data science and [[computer-science|computer science]] more broadly.
-the concept generalizes: when something seems complex, ask "what's the effective dimensionality?" how many independent factors actually drive this system? that's the rank. this move — from surface complexity to underlying simplicity — is a form of [[structural/abstraction-as-power|abstraction]]: ignoring the noise to find the signal.
+the concept generalizes: when something seems complex, ask "what's the effective dimensionality?" how many independent factors actually drive this system? that's the rank. this move — from surface complexity to underlying simplicity — is a form of [[abstraction-as-power|abstraction]]: ignoring the noise to find the signal.
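a quick demonstration of hidden low rank with synthetic data: 50 apparent features secretly driven by 3 factors, which the singular values of the data matrix expose immediately.

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 samples, 50 apparent features, but only 3 true driving factors
factors = rng.normal(size=(200, 3))   # the real low-dimensional signal
mixing = rng.normal(size=(3, 50))     # how the factors spread into features
data = factors @ mixing               # 200×50 matrix with rank 3

# SVD reveals the effective dimensionality: only 3 singular
# values are (numerically) nonzero
s = np.linalg.svd(data, compute_uv=False)
effective_rank = int(np.sum(s > 1e-10 * s[0]))
```

with real data the small singular values won't be exactly zero, so "effective rank" becomes a judgment call about where to cut — which is exactly the PCA/SVD move.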
## the deep point
linear algebra is the mathematics of structure in spaces. once you start seeing things as vectors in spaces — meanings, states, data points, possibilities — you gain access to a powerful toolkit: distances, transformations, decompositions, and the deep structural insights that eigenvalues and rank provide.
-[[structural/the-organizational-lens|the organizational lens]] applied here asks: "what are the dimensions? what are the transformations? what stays stable? what's the effective dimensionality?" these questions apply to far more than matrices.
\ No newline at end of file
+[[the-organizational-lens|the organizational lens]] applied here asks: "what are the dimensions? what are the transformations? what stays stable? what's the effective dimensionality?" these questions apply to far more than matrices.
\ No newline at end of file