Sunday, July 17, 2016

What is the most destructive force on the human mind?

I have been watching some people near-and-dear to me self-destruct lately. It is a terrible thing to see loved ones self-destructing, or trying to destroy each other. And it is hard to get untangled from a group that has turned on itself. I'm trying to get my head wrapped around all the drama here. In the mind of each of the people self-destructing or trying to destroy someone else, one of the others in that group is the villain. There is no doubt that each of them has played the villain in the others' lives -- sometimes really savoring the role, too.

But -- including all of humanity here -- what turns us into villains? The current contenders in the local drama are: 
  • Addiction
  • Hatred
  • Vengefulness
  • Self-righteousness
  • Pride
  • Greed
It's hard to pick the single most destructive thing. Is self-righteousness the enabler for vengefulness and hatred? One thing that troubles me: the people who are acting out of spite are completely sure that they are in the right. And then there is hatred, the foundation for so many evil acts. Is "vengefulness" -- the desire to hurt someone else -- really anything but another name for hatred? But "vengefulness" makes a claim to being right. It plays the victim card to claim unlimited reparations or retaliation. Does it take self-righteousness to give moral top-cover to cruelty?

We're all of us prone to justify our own mistakes. We want to be good and right, and that tendency can be corrupted. We've all known that temptation to justify a mistake rather than admit it. And if we have someone waiting to pounce on a mistake, we are less likely to admit it because the price tag is so high.

It's so easy for someone to appoint themselves the Accuser for someone else who has wronged them. In the Bible, the Accuser is a title for Satan. Do we willingly take up that role?

I see so many things that I consider to be poisons. The antidotes are love, humility, and forgiveness. But is it possible to reach someone who is caught up in a spiral of escalating vindictiveness? Can they even see that, in their determination to top the other person, they are poisoning themselves? I've said it before and will likely say it again: We cannot dehumanize other people without dehumanizing ourselves.

Sunday, July 10, 2016

Physics, Biology, and the Mind

This continues a conversation with Stan about whether the human mind works by natural means (where I have the "pro" side, and he has the "con" side). 


Some premises

I'd like to start with something that seems to be an axiom in your understanding of the mind, starting with your comment:
... it exists outside and beyond the four known physical forces (note 3), which would make it non-physical, non-material ... (Stan)
Your premise here seems to be that, if a thing is not caused by the four forces of physics (gravity, electromagnetism, the weak nuclear force, and the strong nuclear force), then it is non-physical and non-material. I'd strongly disagree: biology is not accounted for here. Much of life in general works outside the four forces of physics. If a flea hops, there's nothing in the four forces that made it hop, so there's something more going on than the four forces -- but there may not be anything more going on than instinct. (Another working definition: let 'instinct' be the motives and reactions that are hardwired into a living thing.) If we want to explain something as simple as a worm wriggling, we need something more than the four forces of physics, but we're not looking at something non-physical or non-material either. So I don't think that "outside the four forces of physics" means that we are working in a realm that is "non-physical, non-material" by any stretch; it may instead be covered by biology.

I'd also like to start with something that is basic to my own understanding of reason as it is generally used:
'Reason' tells us the reasons why the thing we want is right. 
In general, I think people often attach themselves to a conclusion (or a goal, or a side) first, and then set the mind to work to justify what was already desired. Some people want truth; some people want dominance, acceptance, prestige, luxury, or a really nice dinner. When people make a decision, even the "rational exercise" of making a pros-and-cons list usually has the "pros" list cataloging "how does it benefit me personally", and the "cons" list considering "how does it harm me personally." Consider how many rational decisions are, in effect, rational self-interest. (Take, for example, certain atheists who argue that Christianity grew and thrived because it has positive adaptive characteristics that helped people survive better. As soon as you answer, "So, you're saying that Christianity grew because it is a positive and helpful force?" some types of atheists will change tack with dizzying speed, if all they were looking for was a stick with which to beat Christianity.)

Marking territory and the desire to conquer Europe

You mentioned the desire to conquer Europe as an example of an irrational thing that humans do.
shows the necessity of the software's ability to generate irrational adherence to fallacious pursuits, ideologies, and subjective opinion over fact, because that is part of the human mind, too.
In some cases (e.g. the desire to conquer Europe), irrationality seems to track to our more animal natures. Computers, without an animal nature, would "lack" that irrationality. (Is the lack really on their side, here?) I would never expect computers to duplicate an animal need for dominance or territory. Neither do I see our animal need for dominance or territory as some sort of proof that we are 'more than physical' in our minds; I'd say it's proof that we are less than rational. Dominance and claiming territory are expressions of animal instincts.

So I don't see validity to claims along the lines of "computers don't need to feed/fight/flee/mate, so that proves humans are more than physical" ... On the contrary, I'd think it shows that humans are so physical that our instincts hijack our better judgment, and it can interfere with our minds' trustworthiness. There's definitely something more going on than the forces of physics, but it seems to be something animal / biological.

The scope of the proof

You were saying:
if the human mind is to be shown reducible to software, thereby demonstrating that the human mind is likely to be merely physical in nature, then the software must demonstrate the ability to produce all (sum total) of the processes which are available to the human mind ... (Stan)
I'd disagree because of the animal / biological features of our mind. That is to say: the human mind also includes things I'd attribute to biology (e.g. animal instinct). So I'd say that a software system serving as an analogy for the mind would need to account only for those items not accounted for by biology. One or both of us have mentioned biological things like feeding, fighting, fleeing, mating, relationships, motives / desires, and instincts. Those whole arenas of the human mind are the province of living things, as far as I can see. Anything we'd attribute to hormones and adrenaline is biological and physical, yet outside the scope of what a computer would or could do.

My main area of interest is rational thought, which is the original topic on which I'd posted, where our conversation started. I'm also interested in some follow-up on topics you mentioned that intrigue me (e.g. the all-too-common misapplication of Bayes' Theorem, or the nature of creativity).

However, if your view is that a software program would need to duplicate even biological instincts -- because the scope of the proof would need to include biology, since biology is not covered by the laws of physics -- then we'd have a fairly insurmountable difference of opinion on the scope of the proof for this topic. Though we might have finally found one thing on which we agree: I doubt that software would manage to duplicate the effects of our animal instincts.

Monday, July 04, 2016

The determinism of logic, and the desire for understanding

This continues a discussion with Stan on how minds work, and to what extent the mind works by natural processes. It picks up with Stan's most recent post and moves on from there.

What if two people were arguing, and had something to prove? Let's say they wanted to do it logically. Sooner or later, they're likely to use a syllogism. Maybe someone would begin like this:
1. "All humans are mammals"
2. "All mammals are vertebrates"
Therefore ...?
I'll come back to that in a moment. First, to clear up a few things that you (Stan) had mentioned:

1. On digestion: you'll notice on a close reading of my prior post that I'm not saying that the stomach is uninvolved in digestion. Though if someone were to ask, "At what point did such-a-nutrient from your lunch become available (etc) ... and what was the corresponding stomach change?", we might have a difficult time pinpointing a stomach change, even though there is no doubt in either of our minds that it's a natural process. The point of this analogy is that, even when we're in agreement that there's a physical system doing 100% of the work, it's still not a simple thing to look at the enduring organs and pinpoint the physical change that corresponds to a particular event, especially when there are layers of processing (such as enzymes). So if someone asks me, "Show me the physical change in the brain that corresponds to you wanting a peach with lunch," I don't know how much success we'd have there, but that obstacle does not make me think that there's something beyond-the-natural about wanting a peach. We'll come to more interesting examples than peaches shortly.

2. I want to clarify that humans are not analogs of computers, and I'm not saying that we can replicate humans using computers. I expect we can replicate rational thought using computers. Rational thought is one of the easier things to produce deterministically such as in a computer (with the usual acknowledgments that of course we designed the computers in such a way as to make that happen.) To be clear, here, "rational thought" is distinct from irrational thought, creativity, motivation, emotional bonding, and various other kinds of human behavior.

On the point that you (Stan) are contending -- that we are not automatons, that we do not have automatic responses to all inputs -- we agree.

Stan:
It appears to me that the analogs presented are too small, too limited in scope to reflect the actual range of the exquisite capabilities of human minds and intellect

I'd agree about the scope of what I'm saying. I know I've fielded questions on everything from lonely computers to AI bonding, but for my own first argument I'd set out with a more focused scope: "rational thought". I see "rational thought" as things where we can break down our understanding to the level of syllogisms. If you look at the history of humans trying to decide when a thing is proven, syllogisms are forerunners of computer programs. More on that shortly; I wanted to respond to more of your points.

Where we might part company again in some ways (but not others) is where you say,
There is nothing known to physically exist which has the range of capability of the human mind, and which would serve as an adequate analog; the mind is superior to all other systems because it is unfettered by dependence on physics and cause and effect.
You seem to take our range of capability, the fact that we're not automatons (to that point I'd agree), to mean that there's something immaterial going on there, something more than the brain function of the mind. Which brings us to your cat, showing curiosity ...
Because it was an act of intellectual curiosity, his actions were clearly outside the domain of deterministic cause and effect acting on initial conditions.
Are you sure? It seems likely that "intellectual curiosity" is hardwired into brains of a certain complexity. Let's say somewhere above the "flatworm" level but below the "house cat" level, curiosity becomes a fairly standard trait. At some point, animals reach a level of advancement where there's some benefit if they understand more of their world. I'm not convinced that we're outside the domain of deterministic cause and effect at this point ... which I'll explain more as we get deeper into your responses and mine in return.

You also used hardware/software as an analogy for dualism. But software works deterministically. If you're OK with the mind being a deterministic system ... I think it's more likely that I've misunderstood you somewhere, or that I'm reading too much into the choice of analogy, and you were using a convenient one that we'd already discussed.

So (hoping I've cleared up any miscommunication to this point, and with those questions in mind) let me move us onto new ground somewhat:

What if two people were arguing, and had something to prove? Let's say they wanted to do it logically. Sooner or later, they're likely to use a syllogism. Maybe someone would begin like this:
1. "All humans are mammals"
2. "All mammals are vertebrates"
Therefore ...?
Rational thought -- the type that can be broken down into words and expressed in syllogisms -- is a special category in this way: it is defined by the fact that everyone can and should get the same results, given the same input. "Rational thought" is fairly deterministic in its own way (not the physical way). 

If we work through that yawn-inducing syllogism as an example, then anyone who is following the syllogism will come up with the same conclusion from there. "Rational thought" is a specialized area of thought where we do try to show that a certain train of thought must have a fixed outcome. Given certain inputs, we must get a certain result. You yourself rely on -- and appeal to -- the determinism of logic when you say things like:


Therefore Scientism cannot be the case, and it must be false. (emphasis added)
Your appeal is to the determinism of logical argument: Therefore (based on the given input) the outcome must be determined. It's why rational thought is fairly compatible with deterministic systems like computers.

Our claim to rationality, to having other people recognize the validity of our own logic, is based on the "rules of logic" being deterministic after all (after their own rules ... again, more shortly). When it comes to rational thought, if a certain outcome or conclusion weren't inevitable based on the input, would there be any basis for rational proof?

So I'd submit that the act of having a rational discussion -- of presenting evidence and arguments, and expecting them to be taken conclusively -- is an acknowledgment that rational thought is in fact supposed to have a predetermined outcome. I'll go one more step: it's not a problem for logic or for rationality that the outcome is predetermined; in fact that inevitability is its main claim to validity.

There's a danger that the words related to determinism can be misunderstood when applied in the two different scenarios. I'd like to draw attention to the fact that, in rational thought (as opposed to, say, laws of motion), the cause of the pre-determination has changed from external physical forces (such as gravity) to principles of logic. If the outcome of a proof were determined based on processes which operate in ways that are indifferent to the rationality of the outcome (such as gravity), the determinism would be a problem. But here the deterministic nature of the process is logical determinism, not physical determinism. It doesn't actually compromise the rationality of the outcome; in fact the deterministic nature is at that point the guarantee of rationality. In a well-constructed argument with all the facts in hand, there is only one possible conclusion; that is the whole basis of the claim for others to accept a line of logic.
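To make the point about logical determinism concrete, here is a minimal sketch in Python (my own illustration; the function and the premise data are made up for the example, using the syllogism above). Given the same premises, every run reaches the same conclusion, and that sameness is exactly what makes the conclusion worth trusting:

```python
# A minimal sketch (not from the conversation): the "all A are B" syllogism
# run as a deterministic procedure, using the premises from the example above.

def conclude(premises, start):
    """Follow 'all X are Y' premises transitively and return everything that follows."""
    term = start
    derived = []
    while term in premises:
        term = premises[term]      # deterministic step: same premise, same result
        derived.append(term)
    return derived

premises = {"humans": "mammals", "mammals": "vertebrates"}

# Every run, on any machine, reaches the same final conclusion from the same input.
print(f"Therefore, all humans are {conclude(premises, 'humans')[-1]}.")
# -> Therefore, all humans are vertebrates.
```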

You speak of human minds in a way that assumes (to use your words) "violations of the laws of physics".
I doubt that human minds violate the laws of physics any more than computers do when the electricity in them jumps through hoops (figuratively speaking), performing calculations it would never perform except that it is executing instructions for some calculation or other function.

You continue:
Both digestion and computers are limited to the predetermined responses of which they are capable ... In order to make the case, it seems that a reason (or reasoning) must be found for the existence of non-determinism in the mind, when the entire physical universe other than the mind is deterministic and obeys laws which have been discovered by physicists, and which are necessary and sufficient for the entire universe, save minds.
Let's start with a few basic assumptions, even if only for the sake of argument. If we suppose that our minds have:
  1. The desire to understand the world
  2. The concept of good / better
  3. The desire to exercise our own agency
Then (to my thinking) that covers the areas of the human condition that we've discussed.

We haven't really gone in-depth on the topic of desire and its place in the question "Is this determinism?" Desires are particular to living things; they are a distinguishing feature of the living. They are a force of a certain kind, though not in the same sense as when we discuss physics or electromagnetism. I suppose I should mention my working definition of "desire": it is a motive (something that causes action) that may be felt as a need, and that attaches to one or a series of goals/objects to satisfy it. That much said: desire can have a very physical basis.

And that is a good point to pause because here's a question that would affect the course of our conversation: Do you see "desire" as a deterministic thing? Where do you see it fitting into the picture that you outlined before:
when the entire physical universe other than the mind is deterministic and obeys laws which have been discovered by physicists, and which are necessary and sufficient for the entire universe, save minds.
In your thinking, does the mind include desire as one of the inexplicable non-deterministic things?

Sunday, June 26, 2016

Minds and Motives

Stan - I appreciate your understanding of the situation with the day job and real-life time constraints. Here is my next follow-up on our conversation. For this round, I've organized our conversation under our main current topics as headings: Clarifying our previous conversation, Motivation, and Who or what operates the brain?

Clarifying our previous conversation
In your comments you restated my position, showing some places where I should clarify. Let me start there:

I would not say that 'the mind operates the brain'; I would say that the brain is the basis for the mind. I doubt that there is anything that the mind does independently of the brain. I'll explain with some analogies to see if that helps communicate the point. To use digestion as a comparison: there's a lot that happens in the stomach, though you wouldn't necessarily see a change in the stomach itself for every change in its contents, because there are things like enzymes involved. In the same way, there are things that happen in the brain where I'd suspect, when we look at the mechanisms, some of them will be as transient as our thoughts. Or if we use a computer analogy, the brain is something like hardware and the mind is something like software ... in some ways more like the Operating System or even like BIOS. If anyone reading along wants a short intro to BIOS: it is very low-level software that underpins even the operating system, and is used by the operating system. BIOS is barely above the hardware level, and comes pre-loaded on the hardware, regardless of which operating system is installed over it. I think the most basic brain functions -- like trying to make sense of the world -- are comparable to BIOS. We see early versions of understanding in dogs and cats, though not as fully developed as in humans. We come back to a closely-related question in the last section of this post, so I'll leave further comments until then.

Motivation
As far as I can tell, motivation requires life. (You're a theist; what's the difference between God and 'the Force'? I see the difference as awareness and motive. Motive implies having a stake in the outcome. And the questions, "What are God's motives?" and "Why does God even have motives?" are some interesting questions in philosophy of religion.) When it comes to computers and artificial intelligence, maybe I should say specifically that I doubt they could have self-motivation, in that I don't see how they could have a stake in the outcome. A computer could be given a motivation. It might even have a motivation built into its system, like a hypothetical chess-bot with instructions to analyze other chess programs to find logic with the highest win-percentage, or most efficient code for getting there. But that 'motivation' would come from outside because it's not living. More follow-up on that next.
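As a small sketch of what I mean by a motivation "coming from outside" (my own toy example; the candidate data and the scoring functions are hypothetical): the 'goal' such a chess-bot pursues is just a function handed to it, and swapping that function changes the 'motivation' without the machinery caring either way.

```python
# Toy sketch of a "motivation" supplied from outside. The candidate data and
# the scoring functions are hypothetical; nothing here has a stake in the outcome.

def pursue(goal, options):
    """The machinery is indifferent: it optimizes whatever goal it is handed."""
    return max(options, key=goal)

candidate_programs = [
    {"name": "engine_a", "win_rate": 0.61, "lines_of_code": 12000},
    {"name": "engine_b", "win_rate": 0.58, "lines_of_code": 3000},
]

# Two different "motivations", both imposed from outside the system:
highest_win_rate = lambda program: program["win_rate"]
most_compact = lambda program: -program["lines_of_code"]

print(pursue(highest_win_rate, candidate_programs)["name"])  # engine_a
print(pursue(most_compact, candidate_programs)["name"])      # engine_b
```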

Look at human motivation: it's generally to meet some kind of need or fulfill some kind of desire. (Is there more that goes into 'motivation'? Let's at least start there.) What 'need' does a computer have? You could argue 'Electricity' ... but if the power goes out, it doesn't destroy the computer. It cannot sense pain, so it wouldn't seek to avoid it. It doesn't even have a concept of itself, so it does not think about the day that its hardware fails and it's taken to a recycling center. It doesn't have any commitment to an idea of its own superiority, so it doesn't go around trolling. It doesn't have a desire for competence/mastery. (Ever notice the satisfaction we get from competence/mastery? We have an emotional investment, even in understanding things.) So we could probably give a computer motivations by building them into the instruction set. But I wouldn't expect motivations to arise independently in a being with no wants or needs or desires or self-concept. By the way, it interests me whether God gave us the desire to understand, which, taken to its ultimate limits, leads us to reach out to him. There's that old quote from St Augustine, "Our hearts are restless until they rest in You."

So after we talk about motivations, we get back to an interesting question that you introduced: 

Who or what operates the brain?
You introduced the background question of who or what operates the brain. That's a good conversation to have, so let's go there next. I'm going to start with the old digestion analogy just so we have a starting place where I'm hoping we both agree: the stomach and digestion are basically automatic. That is to say, nothing really 'operates' that system except built-in biological functions. I think there is something analogous in the brain/mind where we have a built-in function of trying to understand and make sense of the world. Previously I talked about how there are animals that we wouldn't consider to be very rational (worm, dog or cat) that have some level of understanding.

I'm curious ... I don't know what your view is: Would you say that a dog or cat has some kind of mind-duality going on because they have a basic level of understanding? Or is duality something that begins at a higher level? Is duality just for humans, in your opinion? Does the dog's/cat's mind require duality in order to recognize you and be glad to see you? What functions do you see as needing some sort of transcendence? (Do you consider yourself a dualist? Or would you put it some other way?) I'm offering all those questions as general prompts to see what you think; feel free to pick whichever offers you the best starting point for explaining what you think.

Sunday, June 19, 2016

Way more than you ever wanted to know about my thoughts on AI

In my earlier post about rational thought, I'd started with a fairly limited focus: that if we assume our minds work on a natural basis, we can still think logically. That post generated more interest than I expected, and I'm glad of that. Much of the conversation has been beyond that original focus, expanding to related topics such as perception and its subjective qualities, whether thoughts can be physically captured, and the scope of AI. Those are interesting topics to me, even if not quite my original point. I should mention: because of the day job and other Real Life(TM) constraints, after today it may be next weekend before I have time to post again. 

Continuing the conversation with Stan, I wanted to start with some working definitions of terms, to give us better odds of understanding each other. After I put together the list, I checked that these definitions are in fact dictionary-compatible. The word "natural" has almost too many different dictionary definitions, so there is a high risk of misunderstandings around that word.

  • brain - the physical organ commonly known by that name
  • thought - as a particular ("a thought"): an idea or set of ideas, usually can be expressed in words
    - in general ("thought" as phenomenon): the ability to reason or to introspect
  • consciousness - state of being aware of existence and surroundings; may also include self-awareness and awareness of internal states
  • mind - collection of all mental functions such as thoughts, consciousness; considered by some as "The Brain, collected works". 
  • material - physical (has mass and takes up space)
  • natural - operating according to laws of nature (physics, chemistry, biology, electromagnetics, etc)

So with that as a handy reference point, I wanted to talk to Stan some more. He was saying:
It is not clear at this point how the progression leads to the concept that mind is purely physical, 
Good point; I should state my premises more explicitly. Background: I use computers as a working analogy, and  as shorthand to talk about whatever functions of the mind can be reproduced (or mimicked, if you'd rather) by purely natural processes. Computers work in purely natural ways. So whenever we get to the point of demonstrating that a certain function of the mind can be done by a computer, we have shown that function can be done in a purely natural way. And I rely on Occam's Razor from there: once we have a sufficient explanation, that's our "definition of 'Done'". 

And then it looks like the examples I picked of "how to analyze thoughts" were aspects / approaches that address what I'm interested in, but don't correspond to Stan's own interests. Stan, am I understanding you correctly that what you're really looking for is whether a thought itself (e.g. "Almost time for dinner, wonder what I'll have tonight") can be found somewhere physical in the brain? Or am I not grasping your question yet? 

And Stan wants to know about my hypothetical system, an AI system whose design I've sometimes mulled over. As a preface, I should mention: this is way beyond the scope of my original post arguing that rational thought can be handled by a natural system. Not only does it get beyond "thought" but it also gets beyond "rational" (see below). Still, it's interesting, so let's do it anyway. Stan was asking the following (I'll quote his questions in full, then respond point by point):
While I’m sure you could code curiosity into a deterministic serial machine, can you code in creativity followed by realization? Do you really think that you can code in every human relationship, desire, lust, passion, intellectual neediness, intellectual fallacy due to improper axioms acquired by voluntary ideological bias, or need for belonging, or fear of rejection? Is there nothing about your own job which an algorithm cannot perform just as well?
Taking those pieces one at a time:
  • Creativity: To some extent. To a computer, that's "looking for combinations or applications not yet in your own data set". Imagine an AI system with a respectable starter data set -- similar to what we try to do for kids in school -- and a way to expand on it (e.g. library, internet, Wikipedia), and a way to apply it.
  • Realization: Realization depends so much on having a big picture of the significance of things. In order for something to be more significant than just information, that involves having motivations and values and priorities. I could get a computer to recognize that it had found something new. ("Record does not exist." -- see the small sketch after this list.) But whether it could judge if that was trivial or important would depend on how much perspective it had on the outside world and the world of need.
  • Every human relationship: Lol, I seriously doubt that. So many human relationships are based on our biology, etc.
  • Desire: Again, the non-living nature of the computer would handicap it. Desire is generally based on need.
  • Lust: I'd put that down to animal nature in humans; hormone-based and not 'rational thought' even in us.
  • Passion: If you mean 'lust', see above; if you mean being passionate, then computers do the 'single-focus/driven' routine really well, if that's the kind of passion you had in mind. Though again I'd draw the boundary at whether they had any investment or personal stake in the outcome. Computers, as we know them, have no skin in the game. It's a handicap.
  • Intellectual neediness: If that works out to 'thirst for knowledge', that would probably have to be an explicit instruction to them, since non-living things don't have motives.
  • Intellectual fallacy due to improper axioms acquired by voluntary ideological bias: Yikes, that's definitely not rational stuff in humans, so I'd hope to steer well away from that. Though it might be interesting to model it, if doing an AI model for use in psychology. Anyway, it almost seems like you're arguing it would be an advantage for a computer to be bad at thinking -- to match humans at our worst. I can only suspect I've missed one of your goals here.
  • Need for belonging, or fear of rejection: Needs and fears about belonging/rejection are basically the territory of living things that are also social creatures. There would have to be a whole community of distinct AI beings for that to be feasible, and those AI machines would need actual stakes on the outcome in order for 'need' and 'fear' to apply. I'm not sure how there could be stakes on the outcome.
  • Is there nothing about my own job which an algorithm cannot perform just as well? Background: For the code I've written, aside from the "re-usable" standard, I've also written code-generator programs for some predictable / repetitive code. So there are sets of programs that were not written directly by me, but were generated by a program that I wrote. And there are other people at my company who have also written code-generators, since they are time-savers. My prior job likewise. I don't think it's particularly unusual. So as far as being a coder, there are parts that are disturbingly machine-like and may be automated eventually. Still, the systems are written for humans to use, and so I expect there will always be some advantage to having humans make the decisions.
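Here is the small sketch promised above for the "Record does not exist" point (my own toy example; the knowledge base and the importance weights are made up): noticing that something is new is trivial for a computer, while judging whether the novelty matters requires values that have to be supplied from outside.

```python
# Toy sketch: novelty is easy to detect; significance has to be supplied.
# The knowledge base and the importance weights below are made up.

knowledge_base = {"water boils at 100 C", "salt dissolves in water"}
importance = {"chemistry": 0.3, "medicine": 0.9}   # values imposed from outside

def encounter(fact, topic):
    if fact in knowledge_base:
        return "already known"
    knowledge_base.add(fact)                  # "Record does not exist" -- so record it
    weight = importance.get(topic, 0.0)       # borrowed values; the program has no stake
    return "important discovery" if weight > 0.5 else "noted, probably trivial"

print(encounter("mold inhibits bacterial growth", "medicine"))   # important discovery
print(encounter("salt dissolves in water", "chemistry"))         # already known
```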
Stan continues:
You have to presume mental behaviors to be either (a) algorithmic or (b) huge full featured non-algorithmic programs with nearly infinite branching or (c) self-modifying on the fly, all the while not self-destructing (too often, anyway). Or maybe there is some sort of parallel programming you know about that I don’t. If so, please explain.
I expect mostly (a) and (c), with balancing mechanisms for resolving internal conflicts. From our point-of-view, it's priorities and values and -- as a significant part of that -- self-perceived identity. ("Am I the kind of person who would ____?")
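To sketch what I mean by a balancing mechanism (my own toy example; the impulses, identity rules, and priority weights are all made up): competing impulses get filtered by a self-perceived identity and then ranked by priority.

```python
# Toy sketch of resolving internal conflicts: an identity filter plus priorities.
# The impulses, identity rules, and priority weights are made up for illustration.

priorities = {"keep_promise": 0.9, "avoid_effort": 0.4, "retaliate": 0.7}
identity = {"retaliate": False, "keep_promise": True}   # "Am I the kind of person who would ___?"

def resolve(impulses):
    """Drop impulses the self-image rules out, then act on the highest-priority one left."""
    permitted = [i for i in impulses if identity.get(i, True)]
    return max(permitted, key=lambda i: priorities.get(i, 0.0)) if permitted else None

print(resolve(["retaliate", "keep_promise", "avoid_effort"]))   # keep_promise
```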

Stan: Please give an example of the ability to create new processor instructions.
Imagine an AI system that could read source code libraries, and could add new functions to itself that it found there. Or if it had the ability to redefine one of its existing functions, if it wanted to be able to make a minor mod. E.g. picture a chess app that could read other chess apps and search them for features it didn't have. Of course it would have to be told that it should try to do that: a non-living thing has no reason to care about chess. 
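A minimal sketch of that kind of self-extension, assuming Python as the host language (the ChessApp class and the community_openings module name are hypothetical; the mechanism of loading a function at runtime and attaching it is the point):

```python
# Minimal sketch of a program adding a function it found elsewhere to itself.
# The ChessApp class is hypothetical; "community_openings" is a made-up module name.

import importlib

class ChessApp:
    def __init__(self):
        self.features = {}

    def adopt(self, module_name, function_name):
        """Try to load a function from another library and add it to our own feature set."""
        try:
            module = importlib.import_module(module_name)
        except ImportError:
            return False
        candidate = getattr(module, function_name, None)
        if callable(candidate) and function_name not in self.features:
            self.features[function_name] = candidate   # a capability we didn't have before
            return True
        return False

app = ChessApp()
# Told (from outside) to go looking; the program has no reason of its own to care.
app.adopt("statistics", "median")                   # a real module, just to show the mechanism
app.adopt("community_openings", "suggest_opening")  # hypothetical third-party chess library
print(sorted(app.features))                         # ['median'] unless that chess library exists
```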
Stan: Unless that means that the new instruction for the processor is actually a combination of instructions the processor is already designed to handle at the level from machine code
I think it's a "given" that an AI system would be hosted on some particular machine, and that any new code would have to be compatible with the processor on that machine. 
Stan: At this point I’m still not sure what you’re trying to say: is a mind/thought a completely physical thing, with the universal attributes of mass/energy existing in space/time? Or are you saying that the mind/thought merely uses the brain as a physical platform for operating in the physical realm?
I'm saying that mind/thought uses the brain as a physical platform, and that as far as I can tell, mind and thought occur in ways consistent with them being natural. 

I'm not quite sure whether we agree or disagree since we may not have shared definitions, but it's been a good conversation. I know I've imposed a lot on your time; let me know if you're interested in continuing. 

Saturday, June 18, 2016

The purpose and goal of the mind

This is a follow-up to the earlier post Why I'd like my Christian friends to consider that rational thought is a natural phenomenon, and a continuation of the conversation posted by Stan in An Analysis of the Purely Physical Mind and Rational Thought. My thanks to Stan for his interest in the topic. Since this post is in part a response to his, "you" in this post means Stan.


There are many people who are used to this topic being discussed between theists and atheists. So I'd like to start by clearing up my position on a few assumptions or questions that people may bring to the discussion: 
  • My view of a natural mind does not take sides in whether the preconditions for the mind are naturalistic. I'm a Christian myself and believe that there is a Creator; that doesn't mean that rational thought doesn't work through natural means. 
  • To the best of my knowledge, life comes from life in all known cases
  • I have no objection to the view that the First Cause is a non-material entity
  • I have no interest in steering people into a dichotomy between the mind being either deterministically controlled or due to quantum randomness
  • I'm not arguing whether or not we came by our goals ourselves or were given them from outside. For the present, the scope is whether the mind operates wholly on natural processes. 
Stan interacted with my post at some length, including the analogy of digestion. Judging from people's responses to my earlier post -- comparing what they thought I was saying with what I actually wanted to communicate -- this seems to be a main area where I didn't succeed in getting my point across. So I'll make my point more explicit here.

The point of the original "digestion" analogy is that digestion acts with complexity and purpose that it could not manage in isolation. It manages to fulfill a purpose or goal beyond itself as part of a larger system, though I think everyone would grant that digestion works naturally.

Applying that to the mind, let's talk about the 'purpose or goal' of the mind. The short version (indented for emphasis, so it doesn't get lost in the shuffle): 
I believe the 'purpose or goal of the mind' is to provide us with a map or model of the world in which we live, and that the mind has an innate disposition to explore and understand its surroundings. 
So if we give a rational system (a mind) enough time, it will in fact turn to everything it can find, including paradox, dilemmas, self-evidence, and the nature of comprehension. I'd say that the mind is the natural function of the brain (and associated nervous system e.g. input from the eyes), and that the mind uses natural processes for operation.

How do we get to a point where I'd say that? I'd like to start with the simplest examples and step forward through a few layers of complexity.

Example 1: A simple creature may have the ability to sense heat, or may have an eye spot (not even quite an eye). Through their senses -- and probably without anything we'd recognize as thought -- they gain a simple awareness of heat and light, and can react in self-preservation.

Example 2: A dog or cat has fairly well-developed senses for perceiving its surroundings. It has a basic working model of its home that allows it to recognize people, tell when it's time for dinner, and try to avoid trips to the vet (good luck with that). It has a more complete set of senses and a more developed mental model of the world. It has a grasp of how to be an actor rather than just a passive reactor; it tries to cue us when it's time for dinner, or when it wants to play. We're inching closer to something we might recognize as a 'mind', though still nothing like what humans can manage.

Example 3: Humans have a mind with a drive to understand the world. We go looking for information. We look for new and better ways of storing information, communicating information, and exploring for information. We look for ways to model information, organize information, and put all that understanding to good use. If we find a limit, we look for a way past it. (There's more to our minds than information, but we'll start there, since 'understanding' does involve information.)

I know there's more to be said but for the sake of brevity we'll start there. And now we're far enough along that I can interact with something Stan brought up:
"Because comprehension, thoughts, and concepts are not physical lumps amenable to be analyzed empirically ..."
Actually, just because comprehension, thoughts, and concepts are not physical lumps, that doesn't stop us from analyzing them empirically. (If we find a limit, we look for a way past it.) We humans have come up with a good variety of ways to empirically analyze our own comprehension, thoughts, and concepts. Some tools focus on the 'understanding' part of it, and some focus on the 'brain' electrical / biomechanical part of it:
  • We turn our understanding into a hypothesis. We use that hypothesis to design an experiment, and then test our understanding with that experiment
  • We turn our concepts into a syllogism to test each thought's compatibility with other thoughts in the same system
  • We turn our comprehension into images -- maps, drawings, charts, graphs
  • We turn our understanding into words, and work to grasp the world by defining each thing or idea in words
  • We turn thoughts into mechanical models like little world-map globes or solar-system models
  • We turn concepts of events and people into stories. We also use stories to evaluate "what-if?" with different scenarios, or when we want to see how or why our struggles matter and grasp how our lives have meaning
  • We try to peek inside our own minds with tools like Rorschach blots, dream analysis, word association, and other tools for analyzing our own psychology
  • We have developed brain scans to give us a more direct view on the electrical impulses and brain functions involved in our thoughts and emotions
  • We're making progress towards being able to turn visual images in the brain into computer-output, e.g. take a mental image and turn it into a photo. Based on existing early work, I expect the day will come when we can take a daydream and download it as a movie. (Imagine the job listing for that: Help wanted. Daydreamer. Detail-oriented a must. Familiarity with Synapse-Interface Software required ...) (May Google Minds never develop a Mental StreetView camera ...)

So there are lots of ways in which we model our own thoughts and concepts so that we can examine them. At this point I'm not sure which of those were of interest to you (Stan), or if there's another direction you had in mind.

One thing that persuades me that the mind works naturally is (background: I'm a professional coder with an interest in AI) that I haven't yet heard someone propose a mental function that I couldn't imagine a way of coding into a computer. (No, I don't get to do anything near that cool for my day job, which is just the usual corporate coding to pay the bills. It keeps my mind exercised, though without a ton of creative leeway. But sometimes I mull over how I would go about designing an AI system, and there are people who have done more than mull it over.) There are perfectly natural ways to code awareness, evaluative framework, even the ability for a computer to add new abilities into its own design/framework and exceed its original instruction base.
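For the "evaluative framework" piece, here is a minimal sketch of the kind of thing I have in mind (my own illustration; the values, weights, and options are hypothetical): evaluation against a set of weighted priorities is an entirely natural, deterministic computation.

```python
# Minimal sketch of an "evaluative framework": options scored against weighted values.
# The values, weights, and options are hypothetical illustrations.

values = {"accuracy": 0.5, "speed": 0.2, "low_risk": 0.3}   # the system's priorities

def evaluate(option):
    """Score an option by how well it serves each weighted value."""
    return sum(weight * option.get(value, 0.0) for value, weight in values.items())

options = [
    {"name": "careful_plan", "accuracy": 0.9, "speed": 0.3, "low_risk": 0.8},
    {"name": "quick_guess", "accuracy": 0.4, "speed": 0.9, "low_risk": 0.2},
]

best = max(options, key=evaluate)
print(best["name"])   # careful_plan, given these particular weights
```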

I'm hoping that makes it somewhat clearer how I see the purpose and goal of the mind, and why that all seems like natural processing to me. I look forward to hearing other peoples' thoughts. I'm glad for the interaction and interest in the topic.

Sunday, June 12, 2016

Can a human understand God? The argument from creation

I've been introduced to an atheist blog where the poster, an ex-Christian, was recently considering whether it's possible to know God. After he mulls over some time-and-eternity questions in creation, he puts forward this argument:
Besides, wouldn’t it be better for someone like God to have fellowship with an equal? Only another Creator could truly understand a Creator, right? Yes, only a deity could fully know and appreciate another deity. Created beings, which by definition are lesser beings, could never really know a deity anyway.
He considers his views to be self-evident to anyone with the most basic thinking skills ("This is Logic 101"). (Most people consider their views to be obvious and beyond rational dispute. Considering how many different starting points and premises we all have, really, very few things are beyond rational dispute.)

The reason I quote this fellow's argument is that he speaks for a lot of people, including some groups of Christians, when he says that. I think that deserves a look. 

"Wouldn't it be better for someone like God to have fellowship with an equal?" Let's say yes, and see where it goes. A problem comes up pretty quickly, though, if we go with the view of any monotheistic religion: God has no equal. But what if he could make someone who was as close as possible to equal? Sure, too late and the ship already sailed on whether the other person(s) would be eternal. And creating someone exactly like God is self-contradictory: God wasn't created, therefore any created being is already not exactly like him. But if we bracket the parts that are literally impossible or self-contradictory like that, is it possible to make a creature that is enough like God to have meaningful fellowship with God? 
So God made man in his image, in the image of God he made him: male and female he created them. (Genesis)
Keep in mind that I'm not arguing that an atheist should take the Bible's word that it happened; I'm interested in whether it shows God as trying to make someone like himself.

But are people capable of understanding God? The aspects of God that we can understand are in some way like us. We understand God in roles such as father, king, bridegroom, husband ... all human relationships, all understandable parts of the system in which we live. On the view that God has some role in describing himself in the Bible, God is seen as someone we can relate to. On the view that he created the world, he may also have filled it with analogies to himself that we can come to understand.

What exactly goes into the "image of God"? It's not spelled out for us. But we know at that point that God is a creator, and that he sets things in order so that the world flourishes. He also makes a garden -- and instructs the people to fill the world and rule over it. (I expect that his instructions to fill the earth and rule it, originally, would have resulted in the whole earth being made into various kinds of gardens and orchards and parks, based on Eden, which was the pattern of "ruling" that people had seen.)

If God is Creator, then someone made in his image would also be a creator. Mankind has the most creative mind that we see at work on the planet. We have created works of music, painting, architecture, stained glass, literature, and drama. We have minds that are capable of imagination and fantasy. We have developed musical instruments and engineering techniques. Not many of our gardens are compared to Eden ... apparently the hanging gardens of Babylon were once worthy of notice, a masterpiece in that kind of living art.

Along the same lines: If God is in possession of all knowledge, then someone made in his image would also seek knowledge. If God is love, then someone made in his image would also be capable of love. If God is self-determining ... how far is it possible for a created being to be self-determining? And if God transcended the isolation of being the only Being like himself ... Is mankind hardwired to try to transcend our own limits?

We're lesser beings; we don't really understand all there is to know of God.
We know in part ... now we see through a glass, darkly (Paul, I Corinthians 13:9, 12)
 But there are promises that our capacity to understand will grow.
For now we see through a glass, darkly; but then face to face: now I know in part; but then shall I know even as I am known. (Paul, I Corinthians 13:12)