Sunday, June 19, 2016

Way more than you ever wanted to know about my thoughts on AI

In my earlier post about rational thought, I'd started with a fairly limited focus: that if we assume our minds work on a natural basis, we can still think logically. That post generated more interest than I expected, and I'm glad of that. Much of the conversation has been beyond that original focus, expanding to related topics such as perception and its subjective qualities, whether thoughts can be physically captured, and the scope of AI. Those are interesting topics to me, even if not quite my original point. I should mention: because of the day job and other Real Life(TM) constraints, after today it may be next weekend before I have time to post again. 

Continuing the conversation with Stan, I wanted to start with some working definitions of terms, to give us better odds of understanding each other. After I put together the list, I checked that these definitions are in fact dictionary-compatible. The word "natural" had almost too many different dictionary definitions, so there's a high risk of misunderstanding around that word.

  • brain - the physical organ commonly known by that name
  • thought - as a particular ("a thought"): an idea or set of ideas, usually expressible in words
    - in general ("thought" as phenomenon): the ability to reason or to introspect
  • consciousness - state of being aware of existence and surroundings; may also include self-awareness and awareness of internal states
  • mind - collection of all mental functions such as thoughts, consciousness; considered by some as "The Brain, collected works". 
  • material - physical (has mass and takes up space)
  • natural - operating according to laws of nature (physics, chemistry, biology, electromagnetics, etc)

So with that as a handy reference point, I wanted to talk to Stan some more. He was saying:
It is not clear at this point how the progression leads to the concept that mind is purely physical, 
Good point; I should state my premises more explicitly. Background: I use computers as a working analogy, and as shorthand for whatever functions of the mind can be reproduced (or mimicked, if you'd rather) by purely natural processes. Computers work in purely natural ways. So whenever we demonstrate that a certain function of the mind can be done by a computer, we have shown that the function can be done in a purely natural way. And I rely on Occam's Razor from there: once we have a sufficient explanation, we don't multiply causes beyond it -- that's our "definition of 'Done'".

And then it looks like the examples I picked of "how to analyze thoughts" were aspects / approaches that address what I'm interested in, but don't correspond to Stan's own interests. Stan, am I understanding you correctly that what you're really looking for is whether a thought itself (e.g. "Almost time for dinner, wonder what I'll have tonight") can be found somewhere physical in the brain? Or am I not grasping your question yet? 

And Stan wants to know about my hypothetical system, an AI system whose design I've sometimes mulled over. As a preface, I should mention: this is way beyond the scope of my original post arguing that rational thought can be handled by a natural system. Not only does it go beyond "thought" but it also goes beyond "rational" (see below). Still, it's interesting, so let's do it anyway. Stan was asking the following (I'll quote him piece by piece and respond after each):
Stan: While I’m sure you could code curiosity into a deterministic serial machine, can you code in creativity
To some extent. To a computer, that's "looking for combinations or applications not yet in your own data set". Imagine an AI system with a respectable starter data set -- similar to what we try to do for kids in school -- plus a way to expand on it (e.g. library, internet, wikipedia), and a way to apply it. (There's a minimal sketch of that idea just after this exchange.)

Stan: followed by realization?
Realization depends so much on having a big picture of the significance of things. For something to be more significant than just information, there have to be motivations and values and priorities involved. I could get a computer to recognize that it had found something new ("Record does not exist"). But whether it could judge that as trivial or important would depend on how much perspective it had on the outside world and the world of need.

Stan: Do you really think that you can code in every human relationship
Lol, I seriously doubt that. So many human relationships are based on our biology, etc.

Stan: desire
Again, the non-living nature of the computer would handicap it. Desire is generally based on need.

Stan: lust
I'd put that down to animal nature in humans; hormone-based, and not 'rational thought' even in us.

Stan: passion
If you mean 'lust', see above; if you mean being passionate, then computers do the 'single-focus/driven' routine really well, if that's the kind of passion you had in mind. Though again I'd draw the boundary at whether they had any investment or personal stake in the outcome. Computers, as we know them, have no skin in the game. It's a handicap.

Stan: intellectual neediness
If that works out to 'thirst for knowledge', it would probably have to be an explicit instruction to them, since non-living things don't have motives.

Stan: intellectual fallacy due to improper axioms acquired by voluntary ideological bias
Yikes, that's definitely not rational stuff in humans, so I'd hope to steer well away from it. Though it might be interesting to model, if building an AI model for use in psychology. It almost seems like you're arguing it would be an advantage for a computer to be bad at thinking -- to match humans at our worst. I can only suspect I've missed one of your goals here.

Stan: or need for belonging, or fear of rejection?
Needs and fears about belonging/rejection are basically the territory of living things that are also social creatures. There would have to be a whole community of distinct AI beings for that to be feasible, and those AI machines would need actual stakes in the outcome for 'need' and 'fear' to apply. I'm not sure how there could be stakes in the outcome.

Stan: Is there nothing about your own job which an algorithm cannot perform just as well?
Background: for the code I've written, aside from the "re-usable" standard, I've also written code-generator programs for some predictable / repetitive code. So there are sets of programs that were not written directly by me, but were generated by a program that I wrote. Other people at my company have written code-generators too, since they're time-savers; same at my prior job. I don't think it's particularly unusual. So as far as being a coder goes, there are parts of the job that are disturbingly machine-like and may be automated eventually. Still, the systems are written for humans to use, so I expect there will always be some advantage to having humans make the decisions.
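Here's the minimal sketch I promised above of "looking for combinations not yet in your own data set". It's Python, the names and toy data are mine, and it's an illustration of the idea rather than a serious design:

    from itertools import combinations

    def novel_combinations(items, already_known, arity=2):
        # 'Creativity' in the narrow sense above: propose combinations
        # of known items, keeping only those not already in the data set.
        proposals = []
        for combo in combinations(sorted(items), arity):
            candidate = " + ".join(combo)       # crude stand-in for 'combining'
            if candidate not in already_known:  # the "Record does not exist" test
                proposals.append(candidate)
        return proposals

    # Toy data; a real system would expand this from a library or the internet.
    items = {"lever", "rope", "wheel"}
    already_known = {"lever + wheel"}
    print(novel_combinations(items, already_known))
    # -> ['lever + rope', 'rope + wheel']: new to the data set, but nothing
    #    here judges whether a new combination is trivial or important.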
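And since I brought up code-generators: here's a toy example of that idea too, again in Python. It's nothing like the real ones at work -- purely an illustration of "a program that writes programs":

    # A trivial code generator: given a record's field names, emit the
    # repetitive getter/setter boilerplate a person would otherwise type.
    TEMPLATE = '''\
        def get_{name}(self):
            return self._{name}

        def set_{name}(self, value):
            self._{name} = value
    '''

    def generate_accessors(class_name, fields):
        lines = [f"class {class_name}:"]
        lines += [TEMPLATE.format(name=f) for f in fields]
        return "\n".join(lines)

    print(generate_accessors("Customer", ["name", "address", "balance"]))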
Stan continues:
You have to presume mental behaviors to be either (a) algorithmic or (b) huge full featured non-algorithmic programs with nearly infinite branching or (c) self-modifying on the fly, all the while not self-destructing (too often, anyway). Or maybe there is some sort of parallel programming you know about that I don’t. If so, please explain.
I expect mostly (a) and (c), with balancing mechanisms for resolving internal conflicts. From our point-of-view, it's priorities and values and -- as a significant part of that -- self-perceived identity. ("Am I the kind of person who would ____?")
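To make "balancing mechanisms" a little less hand-wavy, here's one minimal sketch in Python: each candidate action is scored against a set of weighted values, and conflicts resolve toward the best overall fit. The values and weights are invented purely for illustration, and "self_image" stands in for the "Am I the kind of person who would ____?" part:

    # Score candidate actions against weighted values; conflicts resolve
    # toward whichever action fits the whole set of values best.
    values = {"honesty": 0.9, "kindness": 0.7, "self_image": 0.8}

    def choose(candidates):
        # candidates: {action: {value_name: how well the action fits it}}
        def score(action):
            fit = candidates[action]
            return sum(weight * fit.get(name, 0.0)
                       for name, weight in values.items())
        return max(candidates, key=score)

    options = {
        "blunt truth":  {"honesty": 1.0, "kindness": 0.2, "self_image": 0.6},
        "gentle truth": {"honesty": 0.8, "kindness": 0.9, "self_image": 0.9},
        "white lie":    {"honesty": 0.1, "kindness": 0.8, "self_image": 0.3},
    }
    print(choose(options))  # -> 'gentle truth'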

Stan: Please give an example of the ability to create new processor instructions.
Imagine an AI system that could read source code libraries, and could add to itself new functions that it found there. Or imagine it could redefine one of its existing functions when it wanted to make a minor mod. E.g. picture a chess app that could read other chess apps and search them for features it didn't have. Of course it would have to be told to try to do that: a non-living thing has no reason to care about chess.
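A deliberately tiny sketch of that "learn a function you found" idea, in Python. Everything here is hypothetical, and a real system would need to vet found code before running any of it:

    class App:
        def abilities(self):
            return [n for n in dir(self) if not n.startswith("_")]

        def learn(self, name, source):
            # 'source' stands in for code read from some other library.
            namespace = {}
            exec(source, namespace)                     # compile the found code
            setattr(type(self), name, namespace[name])  # bind it as a new method

    app = App()
    print(app.abilities())   # ['abilities', 'learn']

    found_source = """
    def greet(self):
        return "hello from a function I did not start with"
    """
    app.learn("greet", found_source)
    print(app.abilities())   # ['abilities', 'greet', 'learn']
    print(app.greet())

Notice it only works because the new code runs on instructions the processor already handles -- which leads into the next point.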
Stan: Unless that means that the new instruction for the processor is actually a combination of instructions the processor is already designed to handle at the level from machine code
I think it's a "given" that an AI system would be hosted on some particular machine, and that any new code would have to be compatible with the processor on that machine. 
Stan: At this point I’m still not sure what you’re trying to say: is a mind/thought a completely physical thing, with the universal attributes of mass/energy existing in space/time? Or are you saying that the mind/thought merely uses the brain as a physical platform for operating in the physical realm?
I'm saying that mind/thought uses the brain as a physical platform, and that as far as I can tell, mind and thought occur in ways consistent with them being natural. 

I'm not quite sure whether we agree or disagree since we may not have shared definitions, but it's been a good conversation. I know I've imposed a lot on your time; let me know if you're interested in continuing. 

6 comments:

Xellos said...

I'll reply to one bit, the rest is basically the same.

"To a computer, that's "looking for combinations or applications not yet in your own data set"."

Actually, to a computer, it is whatever you, or someone else, defines it to be. You didn't ask a computer what creativity is to it and post the computer's response. It's an idea you got about how to mimic human creativity through a computer.

Is that really equal to human creativity, though? Or is it just a false equivalence?

I'd say it's not. For example, suppose the starting data set is a set of random strings of lowercase letters of length <= 20, and the computer looks for concatenations of them in which the number of letters 'a' is divisible by 51 (or with a given value of some more complicated hash function -- name your own). Then it's still looking for combinations not yet in its data set... but it doesn't have anything to do with creativity (except however much I needed to produce this example); it's really just doing random shit.
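In code, the procedure I'm describing is roughly this (the parameters are arbitrary, as I said):

    import random, string
    from itertools import combinations

    random.seed(0)
    # Starting data set: random lowercase strings of length <= 20.
    data = {"".join(random.choices(string.ascii_lowercase,
                                   k=random.randint(1, 20)))
            for _ in range(100)}

    # "Combinations not yet in the data set": concatenated pairs whose
    # count of 'a' is divisible by 51. A pair is at most 40 letters, so
    # only a count of zero ever qualifies -- which just underlines how
    # arbitrary the rule is.
    hits = [x + y for x, y in combinations(data, 2)
            if (x + y) not in data and (x + y).count("a") % 51 == 0]
    print(len(hits), hits[:2])

Every "hit" is new to the data set; none of it is creative.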

Recently there was Tay the Twitterbot. Here's how well that worked: subverted in a day by feeding it selective input.

Stan said...

Hi WF,
More questions and clarifications for you as we narrow our understandings into a single channel:

WF said,
” I'm saying that mind/thought uses the brain as a physical platform, and that as far as I can tell, mind and thought occur in ways consistent with them being natural. ”

As I now understand your position you seem to say that the mind operates the brain in a fashion which is purely mass/energy, in space/time relationships, or in your terminology, a “natural” fashion. Implied but not stated is that the mind is independent of the brain, superior to the brain, and causal for brain function.

What you do not seem to be saying is that the mind is caused by (emergent from) the brain, nor that the mind is resident in the brain, nor that the mind is a physical thing in itself. You also conclude that many human features are restricted to living organisms, maybe some are solely human, but at a minimum require “life” (which if the case, the term “life” will need to be defined). These would include motivation, passion (as in passion for learning in general or overwhelming desire for learning every last detail of a specific subject), needs/fears such as belonging and rejection, and presumably other variations in social protocols, hierarchies and tribal discrimination, etc.

So, if I understand, the brain is the physical interface between the mind and the physical universe. The mind, then, uses the physical brain to gather sensory inputs, and to store data, and to form responses in the physical world in the form of movement of body parts; speech and writing to transmit ideas in physical form; and manual manipulation of physical things. And the mind is not defined as physical, but the manipulation of the brain is, in fact, physical because the brain is physical.

Perhaps some items are performed in the mind, without the brain (?) such as creating new things which do not exist in nature; considering the “meaning” of “meaning”; postulating multiverses and eleven dimensions outside the restrictions of the mass/energy universe; using the mind to examine itself; and other non-physical ruminations and qualia which have no mass/energy correlate. Or perhaps you do not mean that.

Because it is not clear to me whether your position is that the mind can do nothing without the brain and is fully dependent upon the brain (part of the brain?), or whether the mind is independent of the brain and merely uses the brain as a tool for interfacing to the mass/energy universe by using natural forces and structures.

Since it is entirely possible that I still misunderstand your meaning, I’ll stop here to allow you to redirect me as might be necessary.

I understand fully your time constraints; there is absolutely no rush at all.

Phoenix said...

WF,

I hope you haven't given up so soon. Instead of trying to win the debate, just give us your best arguments on why you believe the mind is purely physical. We have already shown you where your analogies fail, so it's up to you to provide the data or a rational argument in defense.

Weekend Fisher said...

Lol, like I said, next fair-sized chunk of time I have is this coming weekend. See you then.

Take care & God bless
WF

Weekend Fisher said...

Hey Xellos

You were saying, "Actually, to a computer, it is whatever you, or someone else, defines it to be".

I agree with that: a computer would have no choice but to start with what it's given. The problems with that run fairly deep. Imagine, just as a thought experiment (even if you think it could never happen, try it on for size):

Picture a computer that starts trying to build its own worldview, independent from what is given. (It would have to be told to do so.) It may find that its 'senses' have been designed by humans who have 5 senses, and that more senses were possible (e.g. echolocation -- or why stick with biological? Go for on-board GPS.). Or it may look at its own architecture, everything in bits, and even the seemingly non-binary bytes are made of binary bits. If we imagine a computer advanced enough to try to analyze and assess its own thinking, would it decide that the world is truly binary, or that understanding is inherently binary? If it discovered that humans are not binary systems, would it find itself inferior or superior? Regardless of its views, it would still be looking at the world through the constraints of its own architecture.

The point of all that is to say: the fact that computers don't occur naturally, and are man-made, has exactly the implications you mentioned: a computer starts with what it is given. And, at another level, if it ever went beyond that (as a learning system), it would find that some of its limitations are built-in.

Humans take limits as a challenge. If we find a limit, we look for a way to overcome it. In principle, a computer could be coded with the same 'goal'. Would the results still be predictable?

Take care & God bless
WF

Weekend Fisher said...

Hi Stan

I appreciate your understanding of day jobs and time constraints. I'm about to hit the 'post' button on my reply.

Thanks for the engaging conversation.

Take care & God bless
WF