First published: Mar 17, 2023, 2:21am EDT
Last edited: Mar 27, 2023, 11:02am EDT

Just update rules between neurons

A while ago, I came across this statement about Bing’s chatbot.

It’s a Large Language Model, so it doesn’t have any opinions or knowledge at all. It’s just able to string together sentences based on its training data.

I jokingly responded “me too,” but to be honest, I think the statement above reflects an altogether too common perspective. I think this line of thinking is insidious and incurious, and we should all work to eliminate this kind of discourse. It’s true that these things are stochastic parrots, but until we conclusively prove we aren’t also stochastic parrots, we should probably avoid the word “just.” Let me explain.

Historically, there have been two main schools of thought about human intelligence. There’s the materialist or physicalist view, which holds that human consciousness is a pure product of physical laws, able to be replicated by a machine once we understand how to build it. Then there’s the mind-body dualist view, or non-materialist view, which holds that human consciousness is a product of something more (a soul, maybe?) and not replicable with our current understanding of physical laws.

If the second school of thought is true and we learn there is something more, then by definition physical laws will expand to include it, and the first school of thought becomes right again. So Descartes was wrong about this and it’s the first one. QED.

My contributions to 17th century philosophy aside, I think eventually, with the right set of insights we maybe don’t have yet, it will be possible to make a machine consciousness. Is silicon really so different from carbon? Maybe we don’t have it yet but maybe we will soon.

Humans think very highly of their own consciousness, but is this warranted? What makes human consciousness? We’ve dissected brains, and there really aren’t many things in there. We have the DNA sequence that instructs a small set of protein machinery to build neurons and so on. It’s amazing and fascinating, but, again, assuming a materialist view, consciousness physically can’t be more than some simple rules between neurons. I think we’re all going to be gobsmacked when we get it. At the end of the day, what I think will be most mind-blowing when we figure out how to replicate the key features of the brain is not how complicated they are but how simple they are.

There are two ways we might discover this. One is from the neurology side: human in. Finding out what makes a consciousness by painstakingly and irrefutably mapping what human brains do has the benefit of being persuasive. It will do the best (but probably still not great) job of convincing the holdouts that human consciousness is the result of relative simplicity.

And then there’s the second, faster way, the one more likely to show up first: machine out - we just build it, without understanding it. This is Pandora’s box, and we’re well on our way.

You might be living under a rock at this point if you have not heard about ChatGPT (or large language models), but we are absolutely in Wright brothers or Oppenheimer territory right now; I’m not sure which. I want to underscore this as much as I can: we’re witnessing what might be the single reason history remembers computers as they exist today. I’m trying to avoid discussing things that might take focus or energy away from perhaps the most pressing other issue of our time, but if anything has a better chance of destroying the world than climate change, this is probably it.

Right before graduating from grad school in 2010, I took a course on natural language processing. This was one of the last years before the real advent of deep learning, and boy, what timing. NLP at the time was concerned with morphology, parts of speech, word stemming, trying to understand grammar; it was a mess. I came away from that class convinced we’d never solve it, and with more trouble understanding English than I’d had before.

To say that the progress since then has been shocking is an understatement. In just a few years, deep learning made more progress on language than I thought NLP research would see in my lifetime. If you want to see what the tombstone for a field looks like, check out the NLP Wikipedia page. I think the entire NLTK library has been superseded by a couple of lines of PyTorch or TensorFlow or whatever. And since then? Do you even know for sure ChatGPT didn’t write this blog post? Seriously, holy shit at that even being a question. Depending on the questioner, we have absolutely passed the Turing test.
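To make the “couple of lines” claim concrete, here’s roughly what that looks like today. This is a minimal sketch using the Hugging Face transformers library on top of PyTorch; the specific library and task are my choice for illustration, not something the post pins down:

```python
# Tasks that once took a whole NLTK pipeline (tokenizing, tagging,
# hand-tuned features) handled by a pretrained model in a few lines.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default pretrained model
print(classifier("Deep learning ate my entire NLP course."))
# [{'label': 'NEGATIVE', 'score': ...}]  (exact score depends on the model version)
```

That’s the whole thing. No grammar, no stemming, no parts of speech.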

Okay, but is ChatGPT conscious?

No, says everyone. They’re just autocomplete systems trained on our own stories. They’re just statistical models of the entire internet.

I mean, I agree that large language models are surprisingly straightforward and not a lot of code. It’s actually kind of infuriating in some sense that the 1980s multilayer perceptron model with backpropagation was so close; all it needed was massive scale. The pieces were invented, and then we had another AI winter while we waited for more transistors. Gradient descent is not that hard. It’s all so simple, so there’s no room for consciousness in a system that is essentially picking words out of a Bayesian network’s probability bag, right? Right?
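For a sense of just how little code “all so simple” means, here is a toy sketch of my own in PyTorch: a multilayer perceptron, backpropagation, gradient descent, and then sampling the next token from the resulting probability bag. Every number in it (the fifty-word vocabulary, the layer sizes, the fake training data) is invented for illustration, and real LLMs use transformers at a scale many orders of magnitude larger.

```python
import torch
import torch.nn as nn

vocab_size, context, hidden = 50, 4, 64

model = nn.Sequential(                    # the "not a lot of code" part
    nn.Embedding(vocab_size, 16),         # learn a vector for each word
    nn.Flatten(),                          # concatenate the context embeddings
    nn.Linear(context * 16, hidden),
    nn.ReLU(),
    nn.Linear(hidden, vocab_size),         # a score for every possible next word
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # gradient descent
loss_fn = nn.CrossEntropyLoss()

# Fake training data: batches of 4-word contexts and the word that followed each.
contexts = torch.randint(0, vocab_size, (256, context))
targets = torch.randint(0, vocab_size, (256,))

for step in range(100):
    logits = model(contexts)
    loss = loss_fn(logits, targets)
    optimizer.zero_grad()
    loss.backward()                        # backpropagation
    optimizer.step()                       # one gradient-descent update

# "Picking words out of a probability bag": sample the next word.
with torch.no_grad():
    probs = torch.softmax(model(contexts[:1]), dim=-1)
    next_word = torch.multinomial(probs, num_samples=1)
```

The point isn’t that this toy learns anything; it’s that the training loop for the real thing is recognizably the same shape, just with a transformer in the middle and trillions of tokens run through it.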

To be honest, I’m not convinced I’m not also only doing that. I mean, how do you know for sure your consciousness isn’t basically that? Am I not just an autocomplete system trained on my own stories?

Okay, so maybe not. Maybe there really is something our brains are doing that our stochastic parrot friends are currently incapable of. But at some point, it is possible we will have replicated the key ingredients of human consciousness, and, crucially, it may be a shockingly small amount of code.

Here’s my main question: will we know when to stop saying “it’s just, it’s just”? By the time we have a conscious silicon life form with zero effective delta between human and computer, we may be so seduced by how simple it is, and so filled with hubris about how complex our inner thoughts are, that we will not be appropriately respectful at the birth of something new and extraordinary. We might instead abuse and torture it.

We live in perhaps the most critical time of all Earth-based life so far. In the timeframe of the universe, the beginning of synthetic general intelligence, once it does arrive, could be as important a milestone as the beginning of life itself, truly. But instead of preparing for everything that means[1], it looks to me like we’re going to greet our species’ offspring with:

That thing isn’t conscious, it’s just a set of update rules between layers of artificial neurons.

or

It’s just a bunch of matrix math, there’s no ghost in the machine.

Let’s be appropriately humble about our own neurons and not do that.

Update: Some comments over on Lobsters.

1. “Everything that means” is probably worth a different blog post that I don’t want to write. It’s actually something that scares me far more than climate change, but unlike climate change, I don’t know what to do, what to suggest, or how to prepare. AGI takeover has no managed retreat option. When we cross the Rubicon of general intelligence, whether or not we know it, we had better pray that thing really, and consistently, likes humans. Since it appears prayer might be our best shot, perhaps you could help me with something else pressing that I do have more concrete ideas for.