Against Embodiment, Intuitive Physics & Neuroinspiration

Category: Machine Intelligence

(a half of a conversation)

The obvious place to start is the set of tools by which we move beyond our intuition, by which we can escape it.

(And to defuse things, hold in mind that just because something has a metaphorical interpretation, or the words used are metaphorical, it doesn't mean that the direction we went was from physics or embodiment to that concept. Often it's just that the concept was relatable, stuck with people, and propagated.)

Physics arrives at deeply counterintuitive truths consistently by relying on mathematics rather than folk intuition. The curse of dimensionality makes it difficult to use intuition when designing working machine learning models. We create concepts (like the curse of dimensionality itself) that let us route around our broken intuitions. Felt-sense experience is rooted in the particulars of our reality, and so isn't general. It's a form of bias that many want to write into general systems.

  1. Consciousness is not limited to humans, and certainly doesn't depend on creating systems that are broken in ways we can understand, when they could have been built generally

  2. Intuitive physics is the baggage that we need to shed. The sad thing with Lakoff is that he conflates two importantly different visions - understanding how human minds work and understanding how information works. It's biased. And it can be unbiased.
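The curse of dimensionality mentioned above can be made concrete with a small experiment. The sketch below (mine, not from the original conversation; `distance_spread` is a hypothetical helper) shows one standard symptom: for random points in a unit hypercube, the relative spread between the nearest and farthest pairwise distances collapses as dimension grows, so spatial intuitions trained in two or three dimensions stop being informative.

```python
import math
import random

def distance_spread(dim, n_points=200, seed=0):
    """Relative spread of distances from one random point to the rest:
    (max - min) / min, for points drawn uniformly from the unit hypercube."""
    rng = random.Random(seed)
    points = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    origin = points[0]
    dists = [math.dist(origin, p) for p in points[1:]]
    lo, hi = min(dists), max(dists)
    return (hi - lo) / lo

# As dimension grows, distances concentrate around a common value,
# so the gap between "near" and "far" neighbours shrinks dramatically.
for d in (2, 10, 100, 1000):
    print(d, round(distance_spread(d), 3))
```

This is the sense in which felt-sense geometric intuition is a bias: it encodes regularities of low-dimensional embodied space that simply do not hold in the spaces where learned models live.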

Some biases are more canonical than others. It's important to escape evolutionary mechanisms, and so to run an unbiased search over the space of possible value systems.

But neuroinspiration is dangerous. It’ll totally work and help. But reverse engineering is one of the best ways to get the power of your tools far beyond the reach of your understanding.

Value-aligned AI has to understand us, which can be very different from being us. But by my estimation, the cognitive scientists fundamentally failed to abstract. The abstraction they had to do was from human intelligence to intelligence in the abstract. But they got lost in the human weeds, and have been pulling my friends under.

They can be forgiven - they only got one data point.

Intelligent, yes (there are other intelligent creatures); generally intelligent, not quite.

I hesitate to use the word 'values' these days. I have a sense that it's a broken abstraction, sitting in a critical part of concept space and crowding out competing concepts. It's kind of sad that there's already this word out there, and people (intuitively) try to 'define' it and run into all sorts of disgusting conflation, in a place where it's one of the most important concepts to our future. I've been meaning to deconstruct it for a long while.

The critical thing is to account for fundamental unknown-unknowns.

I'm both dodging and not dodging. There's a rush to claim that we know the answer, that we have the truth that is worth implementing. I know that I don't know the truth. I have a meta-truth, but I'm open to it being rewritten.


Source: Original Google Doc
