MOE_REVIEW Paper Notes

13 papers reviewed.

MOEReview Fedus: Switch Transformers

At scale, with regularization (including dropout), k=1 on expert routing is fine!
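A toy sketch of what k=1 routing looks like (all names hypothetical; a numpy illustration of the idea, not the Switch implementation):

```python
import numpy as np

def switch_route(x, router_w, expert_ws):
    """Toy k=1 (Switch-style) routing: each token is sent to exactly one expert."""
    logits = x @ router_w                          # (tokens, n_experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)
    top1 = probs.argmax(-1)                        # one expert index per token
    out = np.zeros_like(x)
    for e, w in enumerate(expert_ws):
        mask = top1 == e
        # scale by the router prob so the router still receives gradient
        out[mask] = (x[mask] @ w) * probs[mask, e:e + 1]
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 4))
router_w = rng.normal(size=(4, 3))
experts = [rng.normal(size=(4, 4)) for _ in range(3)]
y = switch_route(x, router_w, experts)
```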

Full note

MOEReview Gale: MegaBlocks

Standard MoEs either waste computation by padding unused capacity within each expert, or drop tokens assigned to an expert once it exceeds capacity (i.e. truncate so that we don't have to pad too much). Method: instead of padding or dropping, cast the expert computation as efficient block-sparse matrix multiplication, so experts can be variably sized.
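To make the padding/dropping trade-off concrete, here is a toy sketch of the baseline capacity-based dispatch that MegaBlocks avoids (function and variable names are hypothetical):

```python
def capacity_dispatch(assignments, n_experts, capacity):
    """Baseline MoE dispatch: each expert holds at most `capacity` tokens.
    Overflow tokens are dropped; underfull slots become padding (wasted compute)."""
    kept, dropped, padded = [], [], 0
    for e in range(n_experts):
        toks = [i for i, a in enumerate(assignments) if a == e]
        kept += toks[:capacity]         # tokens the expert actually processes
        dropped += toks[capacity:]      # overflow: truncated
        padded += max(0, capacity - len(toks))  # underflow: padded slots
    return kept, dropped, padded

# 8 tokens routed unevenly to 2 experts, capacity 4 each:
# expert 0 overflows (drops tokens), expert 1 is underfull (pads slots)
kept, dropped, padded = capacity_dispatch([0, 0, 0, 0, 0, 0, 1, 1], 2, 4)
```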

Full note

MOEReview Kaushik: Universal Subspace Hypothesis

One-Liner There’s a low-rank “shared” universal subspace across many pretrained LMs, which could thus be leveraged to adapt a model to new tasks more easily. Notable Methods Did a PCA and projected the variance from one architecture onto others (i.e. LoRAs trained for different things).
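A minimal sketch of the PCA idea on synthetic data (the shared basis here is planted by construction, purely to illustrate the "variance concentrates in a low-rank subspace" claim):

```python
import numpy as np

# Stack flattened adapter-style updates from several "models", fit a low-rank
# basis with SVD (PCA), and check how much variance that basis explains.
rng = np.random.default_rng(0)
shared_basis = rng.normal(size=(3, 64))             # hypothetical rank-3 subspace
updates = rng.normal(size=(10, 3)) @ shared_basis   # 10 updates living inside it
updates += 0.01 * rng.normal(size=updates.shape)    # plus a little noise

centered = updates - updates.mean(0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var_explained = (s[:3] ** 2).sum() / (s ** 2).sum()
# nearly all variance fits the rank-3 basis -> evidence of a shared subspace
```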

Full note

MOEReview Krajewski: Scaling Laws for MoE

Define “granularity” as:

\begin{equation} G = \frac{d_{\text{ff}}}{d_{\text{expert}}} \end{equation}

At G=1, we have a dense model; at G>1, we have some kind of MoE. Here are the scaling laws: notice how it's mostly linear! Tiny experts, yay!
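The definition is simple enough to sketch directly (toy helper, hypothetical dimensions):

```python
def granularity(d_ff, d_expert):
    """G = d_ff / d_expert. G = 1 is the dense model; G > 1 slices the same
    FFN width into finer-grained experts ("tiny experts")."""
    assert d_ff % d_expert == 0
    return d_ff // d_expert

# same total FFN width, increasingly fine experts
assert granularity(4096, 4096) == 1   # dense
assert granularity(4096, 512) == 8    # 8x finer experts
```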

Full note

MOEReview Li: Branch-Train-Merge

Train each expert independently, initializing new experts as a weighted parameter average of the existing experts (or a copy). Then at inference we can combine the experts either by domain-conditioned averaging of their outputs, or by averaging the parameters of the experts.
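Both combination modes can be sketched in a few lines (linear toy experts, hypothetical names; real experts are nonlinear, so the two modes genuinely differ there):

```python
import numpy as np

def btm_output_average(x, expert_ws, domain_logits):
    """Domain-conditioned averaging: weight each expert's output by a softmax
    over (hypothetical) domain scores for the current input."""
    w = np.exp(domain_logits - domain_logits.max())
    w /= w.sum()
    return sum(wi * (x @ e_w) for wi, e_w in zip(w, expert_ws))

def btm_param_average(expert_ws, weights):
    """Alternative: merge the experts' parameters into a single model up front."""
    return sum(wi * e_w for wi, e_w in zip(weights, expert_ws))

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 4))
experts = [rng.normal(size=(4, 4)) for _ in range(3)]
```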

Full note

MOEReview Pan: Dense Training Sparse Inference

Train experts densely, and then during inference keep only the top-k experts per token.
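A numpy sketch of the inference-time step (names hypothetical; the dense path is recovered by setting k to the number of experts):

```python
import numpy as np

def dense_then_sparse(x, probs, expert_ws, k):
    """Run all experts, but keep only each token's top-k router probs
    (renormalized) when mixing outputs -- dense training, sparse inference."""
    outs = np.stack([x @ w for w in expert_ws])    # (E, tokens, d)
    topk = np.argsort(probs, axis=-1)[:, -k:]      # (tokens, k)
    mask = np.zeros_like(probs)
    np.put_along_axis(mask, topk, 1.0, axis=-1)
    p = probs * mask
    p = p / p.sum(-1, keepdims=True)               # renormalize kept probs
    return np.einsum('te,etd->td', p, outs)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))
probs = rng.dirichlet(np.ones(4), size=5)
experts = [rng.normal(size=(4, 4)) for _ in range(4)]
y_sparse = dense_then_sparse(x, probs, experts, k=2)
y_dense = dense_then_sparse(x, probs, experts, k=4)   # k = E recovers dense mixing
```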

Full note

MOEReview Rajbhandari: DeepSpeed MoE

Proposes: more MoEs at later layers + a shared expert.
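A rough sketch of the shared-expert idea plus a depth schedule (all names and numbers are hypothetical illustrations, not the DeepSpeed configuration):

```python
import numpy as np

def shared_plus_routed_layer(x, shared_w, expert_ws, probs):
    """Shared-expert sketch: every token goes through the shared expert,
    plus one routed expert (top-1 here)."""
    top1 = probs.argmax(-1)
    routed = np.zeros_like(x)
    for e, w in enumerate(expert_ws):
        m = top1 == e
        routed[m] = x[m] @ w
    return x @ shared_w + routed

# "more MoEs at later layers": e.g. a pyramid of expert counts by depth
experts_per_layer = [0, 0, 4, 4, 8, 8]   # hypothetical schedule

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 4))
y = shared_plus_routed_layer(x, rng.normal(size=(4, 4)),
                             [rng.normal(size=(4, 4)) for _ in range(4)],
                             rng.dirichlet(np.ones(4), size=6))
```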

Full note

MOEReview Sharma: LASER

One-Liner Getting rid of low-singular-value components in weights actually improves model performance. Motivation Previous work has shown that pruning SVD components works without significant performance degradation. But this work shows that by choosing where to prune more carefully, we can obtain better-than-baseline performance. Notable Methods We do this by trying all reductions based on \left(\tau, \ell, \rho\right) tuples, where \tau is the parameter type (projs q, k, v, attn ou...
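The core operation, keeping only the top singular-value components of one weight matrix, sketched in numpy (the tuple search over which matrix and how much to keep sits on top of this):

```python
import numpy as np

def rank_reduce(W, keep_frac):
    """Keep only the top singular-value components of a weight matrix,
    discarding the low-singular-value tail."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    r = max(1, int(len(s) * keep_frac))
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
W = rng.normal(size=(32, 32))
W_low = rank_reduce(W, keep_frac=0.25)   # keep top 25% of components
```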

Full note

MOEReview Shen: ModuleFormer

The ol’ load-balancing loss. Instead of training a router with explicitly labeled data for each expert, a load-balancing + load-concentration loss induces modularity from the data. Insight: we want to maximize the mutual information between tokens and modules. For the router m \sim g\left(\cdot \mid x\right) (“which module m should we assign, given token x”), we write: \begin{equation} \ell_{MI} = \underbrace{\sum_{m=1}^{N} p\left(m\right) \log p\left(m\right)}_{-H\left(m\right)} \underbrace{- \frac{1}{|X|} \sum_{x \in X} \sum_{m=1}^{N} g\left(m \mid x\right) \log g\left(m \mid x\right)}_{H\left(m \mid X\right)} \end{equation} so minimizing \ell_{MI} maximizes I\left(m; X\right) = H\left(m\right) - H\left(m \mid X\right): balanced module usage, but confident per-token routing.
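The loss is easy to sanity-check numerically (toy sketch; `g` is the matrix of router probabilities g(m|x) over a batch):

```python
import numpy as np

def mi_loss(g):
    """ell_MI = -H(m) + H(m|X) for router probabilities g of shape (|X|, N).
    Minimizing it maximizes mutual information between tokens and modules."""
    eps = 1e-9
    p_m = g.mean(0)                                        # marginal p(m) over batch
    neg_H_m = np.sum(p_m * np.log(p_m + eps))              # -H(m): balance term
    H_m_given_x = -np.mean(np.sum(g * np.log(g + eps), axis=-1))  # concentration term
    return neg_H_m + H_m_given_x

sharp = np.eye(4)                  # each token confidently on its own module
diffuse = np.full((4, 4), 0.25)    # every token routed uniformly
```

Balanced-and-confident routing (`sharp`) scores lower than uniform routing (`diffuse`), as the MI objective intends.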

Full note

MOEReview Sukhbaatar: Branch-Train-MiX

It’s MOEReview Li: Branch-Train-Merge, but with MoEs now. Each layer is combined via standard MoE routing, with router weights that are fine-tuned.

Full note

MOEReview Tan: Scattered MoE

A single kernel that scatters the residuals and runs the forward pass at the same time, instead of copying and grouping tokens first.

Full note

MOEReview Yun: Inference-Optimal MoEs

“the scaling law (Section 3) shows that more experts (larger E) result in a higher performance; on the other hand, more experts result in a larger inference cost (Section 4.2)” How do we trade off the cost of more experts (in terms of GPU-seconds, or dollars with C_0 as some per-second GPU cost) against performance? So: slight over-training achieves better performance. Two findings: a smaller number of bigger experts (4/8) is the most serving-efficient, but costs more to train to the same loss…

Full note

MOEReview Zhang: Mixture of Attention Heads

Split the Q projection and the attention output projection into experts, with one router coordinating them. Better performance than MHA.
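A toy sketch of the routing pattern (top-1 routing, hypothetical names; K/V projections stay shared while Q and the output projection are per-expert, as the note describes):

```python
import numpy as np

def moa_layer(x, probs, q_ws, o_ws, k_w, v_w):
    """Mixture-of-Attention-Heads sketch: K/V projections are shared; each
    token's Q and output projections come from its routed expert."""
    top1 = probs.argmax(-1)
    K, V = x @ k_w, x @ v_w
    out = np.zeros((x.shape[0], o_ws[0].shape[1]))
    for e, (q_w, o_w) in enumerate(zip(q_ws, o_ws)):
        m = top1 == e
        if not m.any():
            continue
        att = (x[m] @ q_w) @ K.T / np.sqrt(K.shape[1])
        att = np.exp(att - att.max(-1, keepdims=True))
        att /= att.sum(-1, keepdims=True)
        out[m] = (att @ V) @ o_w
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))
probs = rng.dirichlet(np.ones(3), size=5)
q_ws = [rng.normal(size=(4, 3)) for _ in range(3)]
o_ws = [rng.normal(size=(3, 4)) for _ in range(3)]
y = moa_layer(x, probs, q_ws, o_ws, rng.normal(size=(4, 3)), rng.normal(size=(4, 3)))
```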

Full note
