18-11-06 Common Mistakes in Thinking (Why it's hard to think)

Category: Idea Lists (Upon Request)

1. What you mean when you say a word is usually to activate some but not all of the word's associations. And the distinctions required to disentangle those associations are innumerable.
2. Multiple objectives apply to any set of thoughts, say:

  1. to be communicable (that is, to communicate the right message),
  2. to be useful in a practical sense,
  3. to be truthful,
  4. to cooperate,
    1. ex., signalling tribal loyalty through belief alignment
  5. to be efficiently represented (even though brevity requires abandoning complexity that may be necessary in some situations but not others)
  6. etc.

  These objectives conflict, forcing tradeoffs: you may be willing to make a number of mistakes so that what you're saying can be easily understood. But then you start thinking with that representation yourself.
3. There are no words that describe what you're trying to describe without conflating unlike objects.
4. Continuous, not binary (or discrete)
  1. Collapsing a space into a single object
  2. Assuming something fundamentally probabilistic is binary
5. Using the wrong axes to perform an evaluation (ex., "how similar are these objects?" is very easy to evaluate in a way that doesn't respect the goal of the comparison)
  1. Worse, the sense that the evaluation is conditional on its purpose may be lost. The notion of similarity will be taken to be objective, true for all possible goals.
6. Incomplete decomposition: missing important sub-categories makes a model feel clearly broken (even if something close to the principal components for many goals is captured)
  1. OMG, we need a prediction-focused PCA which balances the goals of variance maximization and maintaining predictive capacity
    1. I guess that this is what LULZ is supposed to be
  2. Ex., intelligence -> analytical, creative, and practical intelligence (Sternberg)
  3. There's immense harm in thinking that knowledge representations need to be literally true. Sternberg's model may be the most useful for many tasks, efficiently making the tradeoffs that any model must make. Bringing a strict standard of truth to it and evaluating it on that basis fails to respect the reality that these models serve multiple objectives.
  4. We end up in a world where mathematics and data are the only things that survive our standards of purity, a world where making conceptual progress is impossible because every concept can be destroyed by those standards.
    1. Yet every day we live by these concepts, we think with these concepts. And we're neutering the process by which we improve them.
7. In so many cases, things aren't right or wrong but more right or more wrong. Spheres are reasonable approximations to the space of weights SGD can reach in n steps, even though the reality is much more complicated.
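The point above about similarity being conditional on purpose can be made concrete with a toy sketch (the vectors and metrics here are invented for illustration): the same three objects give different "most similar" answers depending on which axis of comparison you pick.

```python
import math

# Three toy feature vectors (hypothetical objects, chosen for illustration).
a = (1.0, 1.0)
b = (10.0, 10.0)
c = (2.0, 0.0)

def euclidean(u, v):
    # Similarity as closeness in absolute position.
    return math.dist(u, v)

def cosine(u, v):
    # Similarity as alignment of direction, ignoring magnitude.
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# "Which object is most similar to a?" has no goal-free answer:
# by Euclidean distance c is closest, but by cosine similarity
# b is a perfect match (it points the same way).
nearest_euclid = min([b, c], key=lambda v: euclidean(a, v))
nearest_cosine = max([b, c], key=lambda v: cosine(a, v))
print(nearest_euclid)  # (2.0, 0.0)
print(nearest_cosine)  # (10.0, 10.0)
```

Neither answer is wrong; each metric encodes a different purpose for the comparison.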
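The wish for a prediction-focused PCA can likewise be sketched (a minimal NumPy toy with made-up data): when the direction of maximum variance is unrelated to the target, plain PCA's first component is a poor one-dimensional summary for prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Feature 0: high variance, unrelated to the target.
# Feature 1: low variance, but fully determines the target.
x0 = rng.normal(scale=10.0, size=n)
x1 = rng.normal(scale=1.0, size=n)
X = np.column_stack([x0, x1])
y = x1  # the target depends only on the low-variance feature

# PCA via SVD of the centered data; Vt[0] is the direction
# of maximum variance (the first principal component).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
score = Xc @ Vt[0]  # one-dimensional PCA summary

# The variance-maximizing summary barely correlates with y,
# even though a perfectly predictive 1-D summary (x1) exists.
print(abs(np.corrcoef(score, y)[0, 1]))  # small
print(abs(np.corrcoef(x1, y)[0, 1]))     # ~1.0
```

Balancing variance capture against predictive capacity is roughly what partial least squares (PLS) regression does.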

Source: Original Google Doc
