Ask HN: Is AI converging on human-like cognition?

6 points by seansh 2 days ago

It seems like every day I see another aspect of the human mind implemented in AI, in a very primitive form of course.

For example, chain of thought (CoT) can be thought of as the beginning of the internal monologue that you and I have, and DeepSeek's CoT reads very similarly to how one would think through a problem. Did nature also figure out the same solution? And was consciousness born out of something similar to CoT?
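
From what I understand, mechanically CoT is nothing more exotic than letting the model generate intermediate tokens before committing to an answer. A toy sketch (the "generate" function is a stand-in for any autoregressive sampler, not a real API):

  # Toy sketch of chain of thought: the "monologue" is just extra
  # generated tokens that the final answer gets conditioned on.
  def answer_with_cot(generate, question):
      prompt = f"Question: {question}\nLet's think step by step.\n"
      reasoning = generate(prompt, stop="Answer:")       # the internal monologue
      return generate(prompt + reasoning + "\nAnswer:")  # answer conditioned on it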

Another example I was reading about today is Mixture of Experts (MoE), where we have a router that dispatches tokens to subnetworks that specialize in certain domains. If this technique were to develop further, would that lead to something like the sub-personalities humans have?
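
As I understand the mechanics, the router is just a small learned scorer. A toy sketch of top-k routing (my own simplification in plain numpy, not any real model's code):

  # Toy top-k MoE layer: the router scores each expert for a token,
  # and only the best-scoring experts actually run.
  import numpy as np

  def moe_layer(x, router_w, experts, top_k=2):
      scores = router_w @ x                      # x: (d,), router_w: (n_experts, d)
      top = np.argsort(scores)[-top_k:]          # indices of the k best experts
      w = np.exp(scores[top] - scores[top].max())
      w /= w.sum()                               # softmax over the chosen experts
      return sum(wi * experts[i](x) for wi, i in zip(w, top))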

I may very well be finding patterns where none exist, and these may be nothing more than metaphors at best. I have a very crude understanding of AI, which is why I'm asking here, hoping to get an expert's opinion.

usgroup 5 hours ago

To my understanding, an LLM -- and similar models -- has a Markov chain equivalent.
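
Concretely: with a finite context window, the "state" is the window contents, and each sampling step is a memoryless transition from one state to the next. A toy sketch:

  # Toy sketch: autoregressive sampling over a finite context window
  # is a Markov chain whose state is the window itself.
  def markov_step(state, sample_next_token, window=2048):
      token = sample_next_token(state)      # depends only on the current state
      return (state + [token])[-window:]    # transition: slide the window forward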

There is an old argument from philosophy that any mechanical interpretation of mind has no need for consciousness. Or, conversely, that consciousness is not needed to explain any mechanistic aspect of mind.

Yet, consciousness -- sentience -- is our primary differentiator as humans.

From my perspective, we are making strides in processing natural language. We have made the startling discovery that language encodes a lot about the thought patterns of the humans producing the text, and we now have machines which can effectively learn those patterns.

Yet, sentience remains no less a mystery.

BriggyDwiggs42 2 days ago

Chain of thought is very weird and very impressive. When you watch it work, it looks like a series of flash-frozen human voices being recruited onto the page. We know, of course, that these are the ones that happen to produce the responses we want at the end of the chain, but there isn’t, imo, an underlying meaning being communicated through the words. It’s a Chinese room responding to itself, built to produce desirable paths. Humans mostly don’t work that way. The subconscious can talk to itself without words, and it’s only the most blunt, underspecified ideas that get put into the internal monologue.

  • drakenot a day ago

    People have proposed doing CoT not with output tokens but in the latent space of the model.
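
    Roughly, instead of decoding a token and re-embedding it, the model's last hidden state gets fed straight back in as the next input. A hand-wavy sketch, with hypothetical "transformer", "embed", and "unembed" functions standing in for real model internals:

      # Hand-wavy sketch of latent-space CoT: the "thoughts" stay as
      # hidden vectors and are never decoded into tokens.
      def latent_cot(transformer, embed, unembed, prompt_ids, n_thoughts=4):
          inputs = embed(prompt_ids)                # sequence of input vectors
          for _ in range(n_thoughts):
              h = transformer(inputs)[-1]           # last hidden state
              inputs = inputs + [h]                 # feed it back as the next "token"
          return unembed(transformer(inputs)[-1])   # decode only the final answer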

mettamage a day ago

> Did nature also figure out the same solution? And was consciousness born out of something similar to CoT?

I don't think it's that. I think what we're doing is "conditioning" / "teaching" computers to be useful to us: we take models that we find useful and instill them (subconsciously) in computers. At least, this happens to some extent. Sometimes we see a completely foreign model that a computer applies well, and then we use that.

I don't think one can infer much about nature from computers. Not in this way, at least. What is happening, much more, is that we're seeing (part of) a reflection of ourselves.

iExploder 2 days ago

Imho, as a non-expert: there is nothing in there having an internal monologue or thinking about anything.

These are prediction models that mimic patterns in the data they were trained on and the behaviour that was reinforced in them.

One could argue that encyclopedic knowledge and routine work are deprecated to a degree.

Taking into account that most human work is busy work and there are a lot of inefficiencies due to replicated effort, in the short term I'm starting to get worried about job security...

In the medium term I expect a development boom, with essentially everyone creating everything imaginable; a lot of that will probably be useful...

In the long term, once embodiment is perfected and AI can effectively learn on the go in real time, we will be truly screwed; but that still faces too many challenges, like energy sources and batteries, computing power, and algorithmic efficiency.

  • mettamage a day ago

    > These are prediction models that mimic patterns in the data they were trained on and the behaviour that was reinforced in them.

    Yep this.

    > In the long term, once embodiment is perfected and AI can effectively learn on the go in real time, we will be truly screwed; but that still faces too many challenges, like energy sources and batteries, computing power, and algorithmic efficiency.

    We might just integrate with it to keep up.

    • iExploder 17 hours ago

      > We might just integrate with it to keep up.

      The skeptic in me thinks this path will be reserved for a select few; as the capabilities of integrated humans expand exponentially, so will the need for energy and resources.

      I don't think there's enough to go around unless we unlock space travel...

theothertimcook 2 days ago

Disclosure: Not an expert, barely functioning human.

No, humans are capable of a level of stupidity well beyond the theoretical potential of computers.