My take is that AI is very one-dimensional (within its many dimensions). For instance, I might close my eyes and imagine an image of a tree structure, or a hash table, or a list-of-trees, or whatever else; then I might imagine grabbing and moving the pieces around, expanding or compressing them like a magician; my brain connects sight and sound, or texture, to an algorithm. However people think about problems, that thinking is grounded in how we perceive the world in its infinite complexity.
Another example: say out loud the colours red, blue, yellow, purple, orange, green. Each colour creates a feeling that goes beyond its physical properties into emotion and experience. AI image generation might know the binary arrangement of an RGBA image, but actually it has NO IDEA what it is to experience colour. No idea how to use the experience of colour to teach a peer about an algorithm. It regurgitates a binary representation.
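To make that contrast concrete, here is a minimal Python sketch (assuming a standard 8-bit-per-channel RGBA layout) of what a single "red" pixel looks like from the machine's side: four integers, or 32 bits, with no experiential content attached.

```python
# A single "red" pixel in an assumed 8-bit-per-channel RGBA layout.
red_pixel = (255, 0, 0, 255)  # (R, G, B, A)

# The same pixel as raw bytes -- the "binary representation" a model consumes.
red_bytes = bytes(red_pixel)
print(red_bytes)  # b'\xff\x00\x00\xff'

# And as a 32-bit string, which is all "red" amounts to at this level.
print(format(int.from_bytes(red_bytes, "big"), "032b"))
# 11111111000000000000000011111111
```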
At some point we’ll get there though, no doubt; it would be foolish to say never! Those who want to get there before everyone else should probably focus on the organoids, because the most powerful things come from some Faustian monstrosity.
This is really funny to read as someone who CANNOT imagine anything more complex than the simplest shape, like a circle.
Do you actually see a tree with nodes that you can rearrange and have the nodes retain their contents and such?
Haha, yeah, for me the approach is always visual. I have to draw a picture to really wrap my brain around things! Other people, I'd imagine, have their own human, non-AI way of organizing a problem space. :)
I have been drawing all my life and studied traditional animation though, so it’s probably a little bit of nature and nurture.