Continuing the conversation with Stan, I wanted to start with some working definitions of terms, to give us better odds of understanding each other. After I put together the list, I checked that these definitions are in fact dictionary-compatible. The word "natural" had almost too many different dictionary definitions, so there's a high risk of misunderstandings around that word.
- brain - the physical organ commonly known by that name
- thought - as a particular ("a thought"): an idea or set of ideas, usually expressible in words
- thought - in general ("thought" as a phenomenon): the ability to reason or to introspect
- consciousness - state of being aware of existence and surroundings; may also include self-awareness and awareness of internal states
- mind - collection of all mental functions such as thoughts, consciousness; considered by some as "The Brain, collected works".
- material - physical (has mass and takes up space)
- natural - operating according to laws of nature (physics, chemistry, biology, electromagnetics, etc)
So with that as a handy reference point, I wanted to talk to Stan some more. He was saying:
It is not clear at this point how the progression leads to the concept that mind is purely physical...

Good point; I should state my premises more explicitly. Background: I use computers as a working analogy, and as shorthand to talk about whatever functions of the mind can be reproduced (or mimicked, if you'd rather) by purely natural processes. Computers work in purely natural ways. So whenever we get to the point of demonstrating that a certain function of the mind can be done by a computer, we have shown that function can be done in a purely natural way. And I rely on Occam's Razor from there: once we have a sufficient explanation, that's our "definition of 'Done'".
And then it looks like the examples I picked of "how to analyze thoughts" were aspects / approaches that address what I'm interested in, but don't correspond to Stan's own interests. Stan, am I understanding you correctly that what you're really looking for is whether a thought itself (e.g. "Almost time for dinner, wonder what I'll have tonight") can be found somewhere physical in the brain? Or am I not grasping your question yet?
And Stan wants to know about my hypothetical system, an AI system that I've sometimes thought about designing. As a preface, I should mention: this is way beyond the scope of my original post arguing that rational thought can be handled by a natural system. Not only does it get beyond "thought" but it also gets beyond "rational" (see below). Still, it's interesting, so let's do it anyway. Stan was asking the following (and I'll respond inline, in parentheses).
While I’m sure you could code curiosity into a deterministic serial machine, can you code in creativity (To some extent. To a computer, that's "looking for combinations or applications not yet in your own data set". Imagine an AI system with a respectable starter data set -- similar to what we try to do for kids in school -- and a way to expand on it (e.g. library, internet, wikipedia), and a way to apply it.) followed by realization? (Realization depends so much on having a big picture of the significance of things. For something to be more significant than just information, there have to be motivations and values and priorities involved. I could get a computer to recognize that it had found something new ("Record does not exist."), but whether it could judge that as trivial or important would depend on how much perspective it had on the outside world and the world of need.)

Do you really think that you can code in every human relationship (Lol, I seriously doubt that. So many human relationships are based on our biology, etc.), desire (Again, the non-living nature of the computer would handicap it. Desire is generally based on need.), lust (I'd put that down to animal nature in humans; hormone-based and not 'rational thought' even in us.), passion (If you mean 'lust', see above; if you mean being passionate, then computers do the 'single-focus/driven' routine really well, if that's the kind of passion you had in mind. Though again I'd draw the boundary at whether they had any investment or personal stake in the outcome. Computers, as we know them, have no skin in the game. It's a handicap.), intellectual neediness (If that works out to 'thirst for knowledge', that would probably have to be an explicit instruction to the machine, since non-living things don't have motives.), intellectual fallacy due to improper axioms acquired by voluntary ideological bias (Yikes, that's definitely not rational stuff in humans, so I'd hope to steer well away from that. Though it might be interesting to model it, if building an AI model for use in psychology. Anyway, it almost seems like you're arguing it would be an advantage for a computer to be bad at thinking -- to match humans at our worst. I can only suspect I've missed one of your goals here.), or need for belonging, or fear of rejection (Needs and fears about belonging/rejection are basically the territory of living things that are also social creatures. There would have to be a whole community of distinct AI beings for that to be feasible, and those AI machines would need actual stakes in the outcome for 'need' and 'fear' to apply. I'm not sure how there could be stakes in the outcome.)?

Is there nothing about your own job which an algorithm cannot perform just as well? Background: For the code I've written, aside from aiming at the usual "re-usable" standard, I've also written code-generator programs for some of the predictable / repetitive code. So there are sets of programs that were not written directly by me, but were generated by a program that I wrote. Other people at my company have also written code-generators, since they are time-savers; my prior job was the same. I don't think it's particularly unusual (a minimal sketch of the idea follows below). So as far as being a coder goes, there are parts that are disturbingly machine-like and may be automated eventually. Still, the systems are written for humans to use, and so I expect there will always be some advantage to having humans make the decisions.
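To make the code-generator idea concrete, here's a minimal sketch of the kind of thing I mean -- the record spec and the Customer class it emits are made up for illustration. You describe the repetitive part once, and the program writes out the boilerplate source a person would otherwise type by hand.

```python
# Minimal sketch of a code generator: given a table-like spec, emit the
# repetitive "data holder" class a programmer would otherwise write by hand.
# The spec and the generated class are purely illustrative.

SPEC = {
    "class_name": "Customer",
    "fields": [("name", "str"), ("email", "str"), ("balance", "float")],
}

def generate_class(spec):
    lines = [f"class {spec['class_name']}:"]
    args = ", ".join(f"{name}: {typ}" for name, typ in spec["fields"])
    lines.append(f"    def __init__(self, {args}):")
    for name, _ in spec["fields"]:
        lines.append(f"        self.{name} = {name}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Print the generated source; in practice it would be written to a file.
    print(generate_class(SPEC))
```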
Stan continues: You have to presume mental behaviors to be either (a) algorithmic or (b) huge full-featured non-algorithmic programs with nearly infinite branching or (c) self-modifying on the fly, all the while not self-destructing (too often, anyway). Or maybe there is some sort of parallel programming you know about that I don’t. If so, please explain.

I expect mostly (a) and (c), with balancing mechanisms for resolving internal conflicts. From our point of view, it's priorities and values and -- as a significant part of that -- self-perceived identity. ("Am I the kind of person who would ____?")
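Purely to illustrate what I mean by "balancing mechanisms", here's a toy sketch -- every action name, weight, and the identity rule are invented. Competing proposals get scored against priorities and values, and a self-perceived-identity check can veto an option outright.

```python
# Toy sketch of a "balancing mechanism": candidate actions proposed by different
# subsystems compete on priority, and a self-perceived-identity check can veto.
# All names, weights, and the veto rule are invented for illustration.

from dataclasses import dataclass

@dataclass
class Proposal:
    action: str
    urgency: float        # how strongly a subsystem pushes for this right now
    value_weight: float   # how well it fits the system's standing values

IDENTITY_VETOES = {"plagiarize_answer"}   # "Am I the kind of system that would ____?"

def resolve(proposals):
    allowed = [p for p in proposals if p.action not in IDENTITY_VETOES]
    if not allowed:
        return None
    return max(allowed, key=lambda p: p.urgency * p.value_weight)

choice = resolve([
    Proposal("finish_current_task", urgency=0.6, value_weight=0.9),
    Proposal("chase_new_curiosity", urgency=0.8, value_weight=0.5),
    Proposal("plagiarize_answer", urgency=0.9, value_weight=0.9),
])
print(choice.action)   # finish_current_task: 0.54 beats 0.40, and the veto removes the third
```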
Stan: Please give an example of the ability to create new processor instructions.

Imagine an AI system that could read source code libraries and add new functions to itself that it found there. Or it could redefine one of its existing functions if it wanted to make a minor mod. E.g. picture a chess app that could read other chess apps and search them for features it didn't have. Of course it would have to be told that it should try to do that: a non-living thing has no reason to care about chess. (A rough sketch of what I mean follows.)
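A rough sketch of what that could look like -- not a design, just a toy, and the plugin file name and its functions are hypothetical. The program parses another program's source and adopts any top-level function it doesn't already have.

```python
# Toy sketch of a program that extends its own set of functions from another
# program's source. The plugin file and its contents are hypothetical.
import ast

class SelfExtendingApp:
    def __init__(self):
        # The starter "instruction set": functions the app already knows.
        self.skills = {"evaluate_position": lambda board: 0}

    def absorb(self, source_path):
        """Read another program's source and adopt any top-level functions we lack."""
        with open(source_path) as f:
            source = f.read()
        tree = ast.parse(source)
        namespace = {}
        exec(compile(tree, source_path, "exec"), namespace)  # run the other program's definitions
        for node in tree.body:
            if isinstance(node, ast.FunctionDef) and node.name not in self.skills:
                self.skills[node.name] = namespace[node.name]
                print(f"Learned new skill: {node.name}")

# Hypothetical usage, assuming other_chess_app.py defines e.g. opening_book():
# app = SelfExtendingApp()
# app.absorb("other_chess_app.py")
# app.skills["opening_book"]()
```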
Stan: Unless that means that the new instruction for the processor is actually a combination of instructions the processor is already designed to handle at the level of machine code...

I think it's a "given" that an AI system would be hosted on some particular machine, and that any new code would have to be compatible with the processor on that machine.
Stan: At this point I’m still not sure what you’re trying to say: is a mind/thought a completely physical thing, with the universal attributes of mass/energy existing in space/time? Or are you saying that the mind/thought merely uses the brain as a physical platform for operating in the physical realm?

I'm saying that mind/thought uses the brain as a physical platform, and that as far as I can tell, mind and thought occur in ways consistent with them being natural.
I'm not quite sure whether we agree or disagree since we may not have shared definitions, but it's been a good conversation. I know I've imposed a lot on your time; let me know if you're interested in continuing.