19-10-07 Novel Feedback-Loop Based Machine Learning Algorithms
Category: Idea Lists (Upon Request)
Starter examples include a class-discovery algorithm I invented, Lloyd's algorithm for k-means, GANs, and expectation maximization.
Abstract frame: use x to generate y; use y to generate x; iterate to convergence.
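A minimal sketch of the abstract frame, assuming f and g are arbitrary stand-in maps (the contraction example is purely illustrative, not any specific algorithm from the list):

```python
def iterate_to_convergence(f, g, x, tol=1e-9, max_iter=1000):
    """Alternate x -> y -> x until x stops changing."""
    for _ in range(max_iter):
        y = f(x)          # use x to generate y
        x_new = g(y)      # use y to generate x
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy example: the composition g(f(x)) = 0.5 * x + 1 is a contraction,
# so iteration converges to its fixed point x = 2.
x_star = iterate_to_convergence(lambda x: 0.5 * x + 1, lambda y: y, x=0.0)
print(round(x_star, 6))  # → 2.0
```

Lloyd's algorithm, EM, and GAN training all instantiate this shape with different choices of f and g.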
- Use a clustering algorithm to generate labels from a deep network’s data representation. Re-train the network on the new labeling scheme. Re-cluster the datapoints on the newly trained representation.
- A generative model generates data from an existing representation. A clustering algorithm generates labels. A supervised algorithm generates a representation. That representation is fed back to the generative model to generate data.
- PCA / LDA / an autoencoder / UMAP / t-SNE reduces data dimensionality; a super-resolution or generative model increases it. Then take the generated high-dimensional representation of a datapoint and feed it back into the dimensionality-reduction technique. This could generate an unlimited amount of data for training models that map the same object between low- and high-dimensional representations.
- See what happens at ‘convergence’.
- Use every representation of a datapoint, rather than just one.
- Random matrix projection from low dimensionality to high dimensionality. Compression back to low dimensionality. Repeat the random matrix projection to high dimensionality.
- Generate classification tasks, say by using ranges over some subset of the features to create categories. Train discriminators for those tasks. Use the discriminators to represent a datapoint as the set of categories it belongs to. Then use ranges over these generated features to create new categories.
- The sub-procedure that creates a classification scheme and re-represents the data as those categories may be useful and fascinating on its own.
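The first idea above (cluster a representation, re-train on the pseudo-labels, re-cluster) can be sketched as follows. This is a toy stand-in, not a real deep network: the "network" is a linear projection W, the "re-training" is a least-squares fit of points toward their cluster centroids, and the tiny k-means is written inline to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(Z, k, iters=20):
    # Tiny Lloyd's algorithm: labels from centroids, centroids from labels
    # (itself an instance of the feedback-loop frame).
    C = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((Z[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                C[j] = Z[labels == j].mean(0)
    return labels

# Two well-separated blobs in 5-D.
X = np.vstack([rng.normal(0, 0.1, (50, 5)), rng.normal(3, 0.1, (50, 5))])

# Loop: cluster the current representation, then refit W so points map
# near their cluster centroid (a crude proxy for re-training a network
# on the pseudo-labels).
W = rng.normal(size=(5, 2))
for _ in range(5):
    Z = X @ W                        # representation
    labels = kmeans(Z, k=2)          # clustering generates pseudo-labels
    targets = np.stack([Z[labels == l].mean(0) for l in labels])
    W, *_ = np.linalg.lstsq(X, targets, rcond=None)  # "retrain" the map

print(np.unique(labels).size)  # → 2
```

With well-separated blobs the loop should keep each blob in its own cluster; the interesting question is what it does on data where the clusters are not already obvious.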
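For the reduce/expand loop, a linear case makes the "see what happens at convergence" question concrete. Sketch below with PCA as the reducer and its transpose as a stand-in for the expander (the choice of 3 components is arbitrary): the loop converges in one step, because reduce(expand(z)) = z exactly. Any genuinely new information at convergence would have to come from a nonlinear expander such as a super-resolution model.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
Xc = X - X.mean(0)

# PCA via SVD: the top rows of Vt span the reduction subspace.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
V = Vt[:3]  # keep 3 components (arbitrary choice for the sketch)

def reduce_dim(z):  # high-dim -> low-dim
    return z @ V.T

def expand_dim(z):  # low-dim -> high-dim ("generative model" stand-in)
    return z @ V

x = Xc[0]
low = reduce_dim(x)
high = expand_dim(low)
# Feeding the generated high-dim point back into the reducer changes
# nothing: V has orthonormal rows, so reduce(expand(z)) == z.
print(np.allclose(reduce_dim(high), low))  # → True
```

The random-matrix bullet behaves the same way in the linear case: if the compression is the pseudo-inverse of a full-rank random projection, one round-trip is already a fixed point.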
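The generated-classification-tasks idea can be sketched in miniature. Here a per-feature median threshold stands in for both the "range over a feature" and the trained discriminator (no discriminators are actually trained); each round re-represents every datapoint as its vector of category memberships, and the next round generates categories from those generated features.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4))

def make_tasks(Z):
    # One binary classification task per feature: is the feature above
    # its median? (a hypothetical choice of "range" / discriminator).
    return (Z > np.median(Z, axis=0)).astype(float)

# Round 1: categories from the raw features; re-represent each
# datapoint as its category memberships.
R1 = make_tasks(X)
# Round 2: new categories generated from the generated features.
R2 = make_tasks(R1)

print(R1.shape, sorted(set(R1.ravel())))  # → (100, 4) [0.0, 1.0]
```

Note how quickly the binary re-representation can degenerate under iteration, which is one concrete answer to "ask what happens at convergence".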
Source: Original Google Doc