Forums > Social Discussion > The Ultimate Theory of Reality.

onewheeldaveGOLD Member
Carpal 'Tunnel
3,252 posts
Location: sheffield, United Kingdom


Posted:
As promised in the 'Superultimate Question' thread: -



[Old link]



I've put together my proposed answer to the question- 'why is there something rather than nothing?'.



It's here: -



https://www.geocities.com/combatunicycle/utor/utor.html



Please note before adding to this thread that quantum physics, cosmology, Hawking, the 'Big Bang', Einstein and Schrödinger's cat are almost certainly off-topic, because the 'nothing' referred to in the question is philosophical nothingness (absolute emptiness) rather than the physical 'empty space' nothingness covered by physics.



(For more on this, check out the first link above, where this point was extensively discussed.)

"You can't outrun Death forever.
But you can make the Bastard work for it."

--MAJOR KORGO KORGAR,
"Last of The Lancers"
AFC 32


Educate yourself in the Hazards of Fire Breathing STAY SAFE!


quietanalytic
503 posts
Location: bristol


Posted:
You've misread me.

I am not saying that 'programs require . . . to be.'

I *am* saying that programs need causal processes & temporality to be _instantiated_.

And, on your own account, the mind is an _instantiated_ program (see the quote in my earlier post). My point is that there is a difference between the program 'existing' and running: and, furthermore, on your characterisation the mind is a program _running_.

You say that if UTOR is true, we've never witnessed a program taking place - but that's absurd. Of course we have. I'm not going to argue for that claim here, however. Let's suppose that you're right about this: that we've never witnessed a program taking place. In that case, your entire project is scuppered, because your theory takes some basic examples - programs running on hardware - and works up from them to a finished theory. IF your theory is correct, then the premisses on which the theory is built are false. Hence, again by reductio, the argument fails.

I remain of the view that the killer point is this:

1. Any function involves a change of state.
2. This holds for programs: the whole concept of a program centres around the notion of changes in states.
3. The mathematical realm cannot change in state (since, ex hypothesi, it consists of necessary truths).
4. Hence programs cannot be enacted in the mathematical realm.
5. The mind is a running program.
6. Hence the mind cannot be a mathematical entity.

If you think this argument is invalid, please tell me why: if you think that one or more of the premisses is false, please identify which one, and tell me why.

I should emphasise that I'm not just stating intuitions here. I'm stating platitudes: statements that are trivially true, such as 'in order to run a program, you need to be able to have a change of states.' I'm not mentioning time: that's a thorny issue, I happen to think that I've got some clear ideas about it, but you might disagree with them. Either way, they're inessential to this particular line of argument.
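To make the platitude concrete, here's a toy sketch of my own (in Python, nothing from UTOR itself): a 'program' is literally a transition function, and 'running' it just is repeatedly applying that function to produce new states.

```python
# A toy "program" as an explicit state-transition system. Each step maps
# the current state to the next; "running" the program is nothing over
# and above producing this sequence of state changes.

def step(state):
    """One execution step: increment a counter until it reaches 3."""
    counter, halted = state
    if counter >= 3:
        return (counter, True)   # halting state: no further change
    return (counter + 1, False)  # a genuine change of state

def run(state):
    """Running = repeatedly applying the transition until it halts."""
    trace = [state]
    while not state[1]:
        state = step(state)
        trace.append(state)
    return trace

print(run((0, False)))
# [(0, False), (1, False), (2, False), (3, False), (3, True)]
```

Strip away the succession of states and there is nothing left that could be called the program 'running': that's the sense of premise 2.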

More specifically:

'If UTOR is true, then every mind you have met has in fact been a function of non-temporal mathematical objects.'

Let's assume that this statement is true, for the purposes of argument. In which case:

1. A function is a transformation from state to state.
2. A function of non-temporal mathematical objects is, ipso facto, such a transformation.
3. But the realm of mathematical objects does not admit of transformations, since it is composed of necessary truths.
hence
4. There is no such thing as 'a transformation of non-temporal mathematical objects'
and therefore
5. UTOR cannot be true. QED.

-----

Further problems (hang on: I've just remembered another killer point)

1. Programs are simply syntactic: they specify manipulations of symbols, such as A x B = B x A.
2. Syntax is independent of content.
but
3. The mind has content (that is, semantics)
hence
4. The program cannot describe the mind fully.
and therefore
5. The mind cannot be a mere program.
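To show what 'simply syntactic' amounts to, here's a toy sketch of my own: a rule like A x B = B x A can be applied as pure token-shuffling, with no idea at all what A and B stand for.

```python
# Purely syntactic rule application: swap the operands of 'x' without
# knowing (or caring) what the symbols mean. This is all a program, qua
# program, ever specifies -- manipulations of uninterpreted tokens.

def commute(expr):
    """Rewrite 'A x B' to 'B x A' for any tokens A, B."""
    left, op, right = expr.split()
    assert op == "x", "rule only applies to 'x' expressions"
    return f"{right} {op} {left}"

print(commute("A x B"))      # 'B x A'
print(commute("cat x dog"))  # 'dog x cat' -- the rule is blind to content
```

The rule works identically whatever the tokens happen to denote; nothing in it touches their meaning. That's premise 2.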

//

I've gone on long enough, so I'll leave it there for the time being.

cheers

ture na sig


onewheeldaveGOLD Member
Carpal 'Tunnel
3,252 posts
Location: sheffield, United Kingdom


Posted:
Yes, it sounds as though I may have misread you.



In particular: 'instantiate'. What does that term mean?



Am I correct in assuming that an instantiated program is a program that is 'running'?





-----------



Written by: quiet



You say that if UTOR is true, we've never witnessed a program taking place - but that's absurd. Of course we have.

No, that's not what I said; I said that if UTOR is true, we've never seen a program taking place on a physical, causal, temporal system.



Written by: quiet





Let's suppose that you're right about this: that we've never witnessed a program taking place. In that case, your entire project is scuppered, because your theory takes some basic examples - programs running on hardware - and works up from them to a finished theory. IF your theory is correct, then the premisses on which the theory is built are false. Hence, again by reductio, the argument fails.

Like I said before, those examples are no more than steps on a ladder, there to facilitate understanding. UTOR specifically denies the relevance of any physical systems whatsoever; yet it does use examples such as physical hardware, only as steps to understanding.



They are not premises of UTOR; UTOR does not rely upon the existence of any physical objects as a premise.

-------------



The issue here seems to be that you believe that a 'mind program' requires causal activity ('running').



To me, that is simply an assumption based on your beliefs about programs running on physical hardware.



To express what UTOR is saying concerning 'mind programs' let me use an analogy.



Pi is the ratio of a circle's circumference to its diameter; the naive may be inclined to calculate it by measuring the dimensions of a physical circle.



This will give only an approximation of pi; the correct way to do it is with the appropriate mathematical algorithm.



This is a process that, when performed by humans, takes time, and yields an increasing number of digits of pi as that time passes. We can never, even in principle, calculate all the digits of pi.
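(To make the 'iterative process' point concrete, here's a toy sketch of my own, using one standard series for pi, the Leibniz series; more iterations yield more correct digits, but only ever finitely many:)

```python
# The Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
# Each extra term improves the approximation, but the process never
# terminates with all of pi's digits -- it only ever approaches them.

def pi_approx(terms):
    """Sum the first `terms` terms of the Leibniz series for pi."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

for n in (10, 1000, 100000):
    print(n, pi_approx(n))  # successively closer approximations to pi
```

The point of the analogy: this time-consuming process is one thing, and the timeless value it converges on is quite another.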



This does not impinge in the slightest upon the actual mathematical entity Pi. It exists, in totality, as a timeless mathematical object.



Time does not need to pass for Pi to be.



I am saying that the 'programs' that make up consciousness are the same kind of entity as pi: they do not require causal processes taking place in time.



The kinds of programs you are talking about are analogous to the view of pi as a ratio of dimensions of a physical circle, or of pi as the result of an iterative process taking place over time.



Given what you've said, maybe it's time for me to reflect upon whether 'program' is the best term to use to describe what UTOR is talking about.



quietanalytic
503 posts
Location: bristol


Posted:
'No, that's not what I said; I said that if UTOR is true, we've never seen a program taking place on a physical causal temporal system.'

Apologies: I meant my statement to be elliptical for that. So, for instance, a computer is a physical causal temporal system, and I think that we see a program running on a computer, if anything. So to deny that we see computers running programs is absurd.

I think that my objection holds even if you're using examples as steps on a ladder to understanding. Metaphorically, if you've never actually seen those steps, how can you use them to gain understanding? You're taking the example of a program running on a computer, abstracting away from that to form a model of 'the mind as a program', and then denying that you can see programs running in the first place. So when you say 'the mind is a program', what you mean by 'a program' is determined by the computer analogy. But if there's nothing to talk about in the computer example, then the whole analogy falls down. Does that make sense?

---------

On the next issue, I don't think I've managed to get my point across. So, to clarify:

'to instantiate' means, literally, 'to stand in place', or 'to carry out'. I know this isn't very clear, so I'll try an example. On the functionalist view (I assume from your earlier post that you're familiar with this), things are individuated by their functions. So, for instance, a bag is something which is defined by its function - that is, it holds things. But you still need the physical object in order to carry out this function. Now, bags are what is known as 'multiply instantiatable': that is, the same function can be carried out by various different objects (e.g. you can have bags made of cloth, bags made of leather, and so on).
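To put 'multiply instantiatable' in concrete terms, here's a toy sketch of my own: two objects with different innards carrying out the same function, and a test that individuates them purely by that function.

```python
# The same functional role -- holding things -- carried out by two
# physically different realisers. On the functionalist view, both count
# as bags because both perform the bag-function, whatever the substrate.

class ClothBag:
    def __init__(self):
        self.contents = []           # realised with one internal structure
    def put(self, item):
        self.contents.append(item)

class LeatherBag:
    def __init__(self):
        self.contents = set()        # realised with a different structure
    def put(self, item):
        self.contents.add(item)

def holds_things(thing):
    """Individuation by function: anything that can hold items qualifies."""
    thing.put("apple")
    return "apple" in thing.contents

print(holds_things(ClothBag()), holds_things(LeatherBag()))  # True True
```

Both realisers satisfy the functional test; but note that in each case a concrete object still has to do the holding. That's the instantiation point.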

So you say:

'I am saying that the 'programs' that make up consciousness are the same kind of entity as pi: they do not require causal processes taking place in time.

The kinds of programs you are talking about are analogous to the view of pi as a ratio of dimensions of a physical circle, or of pi as the result of an iterative process taking place over time.'

Here, you've misread me. My point is that 'consciousness' isn't equivalent to 'a program'; it's equivalent to 'a running program', or 'an instantiated program'. Why? Because consciousness involves thinking about things, and thinking about things involves transformations: one state turning into another, or causing another to come about, etc. This is completely disanalogous to 'pi as the result of an iterative process . . . '.

I'm NOT saying that programs can't exist without causal activity (although I'm a bit sceptical about that, also). I am saying that you can't run a program without causal activity - at any rate, you can't run programs in 'the timeless unchanging mathematical realm' - AND that, furthermore, the mind isn't a program, it's a RUNNING program.

----------

I'd be interested to hear what you have to say about my syntax vs semantics challenge [posted above].

The debate about whether consciousness is best compared to a program, or to a running program is slightly trickier, but I think that the syntax/semantics one is pretty fatal.

----------

One final question. Programs aren't processes, but they specify processes. Now, is consciousness the process, or is it the specification (or the description) of the process? If it's the description of the process (that is to say, the program), then what is the process?



quietanalytic
503 posts
Location: bristol


Posted:
incidentally - and I'll wait for your response to ^^ that post before getting into this in any detail - I *strongly* recommend that you take a look at:

J Searle, 'The Chinese Room' [do a google search for it, I'm sure you'll find either the original paper or a decent summary thereof]

there's also an amazing paper by Hilary Putnam which changed my mind about the whole AI business. if you buy externalism about mental content, then his paper is the final nail in the coffin for the Turing Test.



onewheeldaveGOLD Member
Carpal 'Tunnel
3,252 posts
Location: sheffield, United Kingdom


Posted:
Firstly, I do have some knowledge of the relevant technical lingo, but it's far from complete, and I prefer not to use technical language.

This is because, during my philosophy degree, I came to the conclusion that its use was both excluding (to non-academic philosophers) and responsible for the lack of clarity evident in much contemporary philosophy.

My approach is that if one understands the points one is trying to convey, then they can be expressed concisely and in plain (non-technical) English.

Which is why I've not got involved in -

Written by: quiet


I'd be interested to hear what you have to say about my syntax vs semantics challenge




I don't understand it, and non-philosophers popping in to look at this thread certainly wouldn't.

If you can convey it in non-technical language I will certainly attempt to address it.

Written by: quiet



I think that my objection holds even if you're using examples as steps on a ladder to understanding. Metaphorically, if you've never actually seen those steps, how can you use them to gain understanding? You're taking the example of a program running on a computer, abstracting away from that to form a model of 'the mind as a program', and then denying that you can see programs running in the first place. So when you say 'the mind is a program', what you mean by 'a program' is determined by the computer analogy. But if there's nothing to talk about in the computer example, then the whole analogy falls down. Does that make sense?





Yes, it makes sense; I just don't agree with it.

Assuming something to be the case, deriving a contradiction from that assumption, and thus establishing the falsity of the initial assumption is a standard practice in maths and philosophy.

The fact that the initial assumption is false, does not make the line of reasoning invalid.

Similarly, if UTOR assumed the existence of physical objects and then went on to show that they didn't exist, it would not necessarily be a problem.

However, UTOR doesn't actually do that. Acknowledging that people have a certain world-view, one that includes, for example, physical matter and computers, it simply uses those concepts to get its point across.
---------
Written by: quiet


Here, you've misread me. My point is that 'consciousness' isn't equivalent to 'a program'; it's equivalent to 'a running program', or 'an instantiated program'. Why? Because consciousness involves thinking about things, and thinking about things involves transformations: one state turning into another, or causing another to come about, etc. This is completely disanalogous to 'pi as the result of an iterative process . . . '.

I'm NOT saying that programs can't exist without causal activity (although I'm a bit sceptical about that, also). I am saying that you can't run a program without causal activity - at any rate, you can't run programs in 'the timeless unchanging mathematical realm' - AND that, furthermore, the mind isn't a program, it's a RUNNING program.

OK, you're saying that consciousness is a 'running' program; that's clear, I understand it.

Now UTOR is saying that consciousness is simply a program- not a running program.

i.e. that the program of consciousness doesn't need to be 'run' for consciousness to be: that there don't need to be 'processes' running in time.

I think that's a fairly clear expression of where we're differing.

Next must come the task of deciding between those two alternatives, i.e.

1. consciousness is a 'running' program

2. consciousness is a non-running program

So, other than simply re-stating that consciousness necessitates change/temporality etc., do you have any grounds whatsoever that can establish it?

(I fully agree with you that our conscious experiences involve the appearance of transformations; however, UTOR says that these appearances of transformation are accommodated by the non-temporal programs that make up consciousness.

Now, I'm not capable of explaining how that can happen; what I'm asking you is whether you can come up with any reasoning to show that it can't.)



onewheeldaveGOLD Member
Carpal 'Tunnel
3,252 posts
Location: sheffield, United Kingdom


Posted:
Written by: quiet


incidentally - and I'll wait for your response to ^^ that post before getting into this in any detail - I *strongly* recommend that you take a look at:

J Searle, 'The Chinese Room' [do a google search for it, I'm sure you'll find either the original paper or a decent summary thereof]

there's also an amazing paper by Hilary Putnam which changed my mind about the whole AI business. if you buy externalism about mental content, then his paper is the final nail in the coffin for the Turing Test.




I'm very familiar with the chinese room; not sure how it's relevant to UTOR though.

My view is that the system as a whole is conscious.



quietanalytic
503 posts
Location: bristol


Posted:
hang on a sec - i'm generally quite careful to avoid technical language unless i think that people know what's being talked about. so when i introduced the semantics/syntax stuff, i did explain what the terms meant. in this bit from a couple of posts ago, for instance:

'1. Programs are simply syntactic: they specify manipulations of symbols, such as A x B = B x A.
2. Syntax is independent of content.
but
3. The mind has content (that is, semantics)'

Syntax = the rules for manipulating the symbols [a computer program, for instance, is purely syntactic]
Semantics = what the symbols mean

-----------

'However, UTOR doesn't actually do that. Acknowledging that people have a certain world-view, that includes for example, physical ,matter and computers, it simply uses those concepts to get its point across.'

Right: but my point was that, if UTOR is right, the concepts aren't there in the first place - specifically, if we never see a program running . .(on a physical, causal, etc etc) . . . then where does our concept of a program come from? I suspect that this objection can be met, so please feel free to ignore it.

Moving on.

-------------------

'however, UTOR says that these appearances of transformation are accommodated by the non-temporal programs that make up consciousness.'

And you want me to generate an argument to the effect that this can't work. Well:

1. An appearance of transformation involves a perceived change. [i'm not saying that the change has to be there, but there has to be some kind of perception of change, even if illusory].
2. Any perceived change (say, between A and B) requires that the focus of attention alters somehow. This isn't very well put, so again I'll resort to examples: we say (trivially) that objects change over time, where what that means is that object X observed at time 1 is somehow different from object X observed at time 2. In order for there to be a perceived change, some 'base variable' ALSO has to change. So (e.g.) the width of a plank may change across the length of the plank; you have two values of 'width' and also two values of 'length'. Or the width of the plank may change over time. Um. I thought I'd be able to put this clearly, but it's a bit late, and I don't think I'm doing a very good job.
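Maybe a toy sketch of my own puts the 'base variable' point better: the plank's width only counts as changing relative to two values of some other variable (position along its length, or time).

```python
# A change is always a difference in one quantity *across* values of a
# base variable. Here the base variable is position along the plank's
# length; the width is different at two positions, and only that pairing
# of values exhibits a change.

def width_along_plank(length_pos):
    """Width of a (hypothetical) tapering plank at a given position."""
    return 10 - 2 * length_pos

# A single measurement exhibits no change; change appears only once the
# base variable takes two distinct values.
w1, w2 = width_along_plank(0.0), width_along_plank(1.0)
print(w1 != w2)  # True: a change in width across a change in length
```

Swap 'position along the length' for 'time' and you get ordinary change over time: the same two-variable structure, which is why a perceived change seems to need something that varies to be perceived against.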

Kant is also a bit cryptic, but I think the quote makes sense:

'All determination in regard to time presupposes the existence of something permanent in perception. But this permanent something cannot be something in me, for the very reason that my existence in time can itself only be determined by this permanent something. It follows that the perception of this permanent existence is possible only through a thing outside me, and not through the mere representation of a thing outside me. Consequently, the determination of my existence in time is possible only through the existence of real things that I perceive outside me.'

And *that's* why conscious experience needs temporal entities to ground it.

------------

Re: the Chinese Room argument: the point of the Chinese Room is to show that syntax (i.e. the 'program', consisting of a set of rules for manipulating symbols) isn't sufficient to yield semantics (i.e. the meanings of the symbols, or an understanding of Chinese). But if you think that the system as a whole is conscious, then I'll grant that you're not going to buy the example. I personally think that the notion of the Room understanding Chinese is a bit weird, but I don't have time to push that intuition at the moment.

A similar argument is Putnam's: his example goes like this.

Suppose that you see an ant wandering around in the sand. By some strange coincidence, the ant just *happens* to trace a stick figure in the sand. The question, then, is this: does the stick figure represent anything? (i.e. is it meaningful, or merely random?) If you're inclined to think that it does represent something, then do you think that there's a difference between the picture as-drawn-by-an-ant-at-random, and a similar picture drawn by a person with the intent of representing another person?

Putnam's example is supposed to show that the formal stuff isn't sufficient for meaning. Particularly, in order to represent something you need to have encountered it: there needs to be some causal connection between you and whatever you're trying to represent, in words, song, paintings, etc. The pattern itself doesn't represent anything, intrinsically: a haphazard arrangement of twigs doesn't _mean_ anything, whilst an arrangement of twigs which was put there by someone _might_ mean something.

But, critically, minds are essentially representational: we can have thoughts *about* things. But if the formal bit (that is, the syntax) isn't sufficient for representation, and hence meaning, then it can't be sufficient for thought.



onewheeldaveGOLD Member
Carpal 'Tunnel
3,252 posts
Location: sheffield, United Kingdom


Posted:
I guess you're saying that even the appearance of change requires actual change?

Good point; I'll have to get back to you on it after I've thought for a while. (Are you familiar with the 'B-series vs. A-series' models of time, i.e. the view that the appearance of change can occur in a non-temporal system? Because fundamentally, that is the kind of scenario that UTOR is operating within.)


Written by: quiet



Further problems (hang on: I've just remembered another killer point)

1. Programs are simply syntactic: they specify manipulations of symbols, such as A x B = B x A.
2. Syntax is independent of content.
but
3. The mind has content (that is, semantics)
hence
4. The program cannot describe the mind fully.
and therefore
5. The mind cannot be a mere program.

Would this not mean that the mind cannot be a running program? (Just checking, because earlier you seemed to be saying that the mind was a running program.)

I guess my reply is that I see no reason why semantics can't be a result of a program.

ie manipulation of symbols results in semantics.

If the human mind is a product of brain states, that would seem to be an example of semantics arising out of what is essentially a program.

Similarly, when artificial intelligences arise, again that would be programs resulting in semantics.

So, to that extent, I'd say that semantics and syntactic programs are not mutually exclusive.



quietanalytic
503 posts
Location: bristol


Posted:
A- vs B-series: yes, I'm familiar with it, but I got baffled by philosophy of time in my first year and gave up. But I thought that the point of the A- vs B-series was that time implies change, and change implies the A-series. However, it's not true (is it? I might be wrong about this) that the B-series is non-temporal; events within the B-series still take place before and after others.

Would my syntax problem mean that the mind cannot be a running program? Well, it would mean that the mind cannot *just* be a running program. I think.

I'm suggesting that if you want to view the mind as a program, then you had better view it as a running program. But the view of 'mind as program' might not be right in the first place.

Now I'm *not* claiming that syntax and semantics (that is to say, form and content, or symbols and meaning, etc) are mutually exclusive: the claim is just that syntax isn't sufficient for semantics. Syntax can't result in semantics, since syntax is just a set of rules for manipulating formal symbols (like "p, if p then q, so q"). It doesn't tell you anything about the content of these symbols: 'p' and 'q' could mean pretty much anything. The point here is that the mind isn't just a product of brain states: it's a product of brain states interacting with the world in a certain way. That's how our thoughts get their content.

You're right that 'when artificial intelligences arise, again that would be programs resulting in semantics' [at least on the current model of what an AI might look like]: but that's where the Chinese Room kicks in, and where all of this syntax / semantics stuff becomes relevant. If I'm right to think that syntax is insufficient for semantics (that is, if I'm right that specifying the program won't specify the content of the thoughts), then AI suddenly seems a lot further off. And the Turing Test won't suffice, since for the AI's utterances to be meaningful they not only need to have the right form (or syntax), but also the right content (and hence meaning).



onewheeldaveGOLD Member
Carpal 'Tunnel
3,252 posts
Location: sheffield, United Kingdom


Posted:
I've been getting a vibe for the past few posts that your previous acceptance of AI: -



Written by: quiet





Granted, it doesn't matter what it's running on: it could be a bit of squishy grey matter, a complex computer, or a bit of swiss cheese if you could set the swiss cheese up appropriately.

is not as strong as I thought.



for example, your doubts about the 'chinese room' being conscious.



I believe that being at ease with the logical possibility of AI is probably a prerequisite for this approach to UTOR (as I say on the actual site)



Written by: from UTOR site





What is consciousness?

One of the assumptions that the theory uses is that consciousness is basically information/data- that 'mind' is fundamentally a program.



This will be easier to accept for those of you who believe that artificial intelligence is, in theory, possible; similarly for those who believe that mind is solely a function of the material brain.



For those who believe that artificial intelligence is not possible, or that consciousness is not merely a function of the brain, it will be harder to accept.

- if you have doubts about the possibility of AI, I don't think you'll find the rest of UTOR very convincing.



Written by: quiet



Now I'm *not* claiming that syntax and semantics (that is to say, form and content, or symbols and meaning, etc) are mutually exclusive: the claim is just that syntax isn't sufficient for semantics. Syntax can't result in semantics, since syntax is just a set of rules for manipulating formal symbols (like "p, if p then q, so q"). It doesn't tell you anything about the content of these symbols: 'p' and 'q' could mean pretty much anything. The point here is that the mind isn't just a product of brain states: it's a product of brain states interacting with the world in a certain way. That's how our thoughts get their content.

All I can say here is that, IMO, syntax can result in semantics; that is down to my understanding of mind and such thought experiments as the Chinese Room.



If you can get hold of a copy of 'The Mind's I' by Hofstadter and Dennett: their approach both to the Chinese Room and to AI is pretty much what I concur with (as well as being an excellent all-round book with a very original approach; IMO the best book I encountered in my graduate years).



If you do want to persist with the view that syntax cannot result in semantics, then I will have to say that I just plain disagree; and ask if you can supply any arguments or reasons for your belief.



Written by: quiet





The point here is that the mind isn't just a product of brain states: it's a product of brain states interacting with the world in a certain way. That's how our thoughts get their content.

But, as far as the mind is concerned, the interaction of mind states with the external world simply produces more mind states.



i.e. a perception of a physical object creates a mind state (the mental representation of the object); so, to that extent, everything that happens in consciousness is purely mind-state interactions (some of which are admittedly caused by external factors).



Suppose we put someone in a virtual reality scenario: plug his/her sense organs into a computer and simulate a reality.



Then, over time, one by one, cease to feed actual data in, and instead set up feedback loops, so the data going into his/her mind is actually supplied by their own mind states.



At that point, their consciousness is entirely determined by their own mind states with no external influence.
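A toy sketch of my own of that feedback set-up: a stand-in 'mind' function first driven by external percepts, then fed nothing but its own previous output.

```python
# Sketch of the thought experiment: a mind-function whose input is first
# supplied externally, then fed back from its own prior output. The
# function itself is a mere stand-in for 'producing a new mind state'.

def mind(percept):
    """Stand-in for producing a new mind state from an input."""
    return f"state({percept})"

external_inputs = iter(["tree", "dog"])

state = mind(next(external_inputs))   # driven by the world
state = mind(next(external_inputs))
for _ in range(3):                    # now driven only by its own states
    state = mind(state)

print(state)  # state(state(state(state(dog))))
```

Once the loop closes, every new state is a function of prior states alone, with no external influence: that's the scenario.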



quietanalytic
503 posts
Location: bristol


Posted:
I'm not sceptical about the possibility of AI. I do, however, think that the programming would be insufficient: at some point, you've got to specify the semantics.



The reason why syntax is insufficient for semantics is very simple: syntax just specifies manipulations of symbols, and doesn't tell you anything about what they mean.



Put another way: syntax operates over symbols. But these symbols can have different interpretations. Hence syntax doesn't determine the appropriate interpretations, and hence doesn't specify content - that is, semantics. QED.



--------



Searle has a response to the objection that the Chinese Room actually *does* understand Chinese:



Imagine the rulebook 'internalised' (that is, memorised by the man inside the room, so that the entirety of the input-output process takes place inside the man's brain). Still the man would not understand Chinese; hence the system as a whole cannot suffice for understanding.



---------



Re: your VR case. I grant you, your example is sound, but it misses the point. I agree that ONCE you've specified the content (i.e. once you've set the semantics), then you don't need any external input in order for consciousness to carry on (well, arguably). My point is that you HAVE TO specify the CONTENT (you can do this and then disconnect the person from reality, but it still needs to be done) in order to have thoughts, and hence consciousness.



Look: you've asked me 'if i can supply any arguments or reasons for your belief' (about syntax and semantics). I've given them repeatedly. Please tell me why you're unconvinced.



One final attempt.



1. If syntax could ever be sufficient for semantics, then:

2. There would be one case (at least) where syntax determined semantics.

3. That is, there would be one case where the symbols over which the syntax was operating would have a determinate interpretation.

4. But if they have a determinate interpretation, then they are not symbols - because symbols, ex hypothesi, can stand in for more than one thing.

5. Hence syntax cannot ever be sufficient for semantics.



Or even:



1. Syntax (or programs) is (or are) 'multiply realizable'; you can have more than one system running the same program.

2. Hence any one program can operate over different contents (consider a program which takes 'p' and 'if p then q' and infers 'q': that specifies the program, and it can perform this operation whether 'p' means 'it is raining' or 'there is beer in the fridge')

3. But if syntax determined content, then the program in question could not operate over multiple contents (since there would be only one, determinate content for the program)

4. Hence programs wouldn't be multiply realizable, which is a contradiction of (1), hence, by reductio, one of the premisses is false.
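To make the multiple-realizability point concrete, here's a toy Python sketch (entirely my own illustration - the function and the two interpretation dictionaries are invented for this post, they're not anything from UTOR):

```python
# Toy illustration: one purely syntactic program, two different contents.

def modus_ponens(premises):
    """Syntactic rule: given symbol 'p' and rule ('p', 'q'), infer 'q'."""
    facts = {x for x in premises if isinstance(x, str)}
    rules = [x for x in premises if isinstance(x, tuple)]
    for antecedent, consequent in rules:
        if antecedent in facts:
            facts.add(consequent)
    return facts

derived = modus_ponens({'p', ('p', 'q')})   # the symbol 'q' is derived

# The program never fixed what 'p' and 'q' mean; either reading works:
weather = {'p': 'it is raining', 'q': 'the streets are wet'}
fridge = {'p': 'the shop was open', 'q': 'there is beer in the fridge'}
print([weather[s] for s in sorted(derived)])
print([fridge[s] for s in sorted(derived)])
```

The inference runs identically under both interpretations, which is just premiss (2) above: the syntax alone doesn't settle the content.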

ture na sig


onewheeldaveGOLD Member
Carpal \'Tunnel
3,252 posts
Location: sheffield, United Kingdom


Posted:
Many of the arguments around the 'Chinese Room' are deeply disorientating due to a haphazard dismissal of the scale of the components involved.

ie the room and contents are a simplistic analogy of processes involving immense numbers of sub-processes.

When you talk about a man 'internalizing' the room and contents, that's a simple sounding scenario; but you're actually talking about internalizing a system which is complex enough to either be conscious, or at least behave in a manner sufficiently complex to simulate consciousness.

Obviously a human brain has sufficient complexity to 'run' a 'consciousness' program; so imagine a man 'internalizing' the structure of the processes running in a different human brain- would he 'feel' that mind- would he be able to access the mental states of that mind?

Does that make you feel differently about 'internalising' the chinese room?

(if you can get hold of a copy of 'The Mind's I', it deals with all Searle's objections very well, IMO)

------------

The syntax/semantics line is something that I'm finding difficult to keep track of.

Firstly, can we scrap the 'symbol' terminology as, IMO, it's a little loaded.

And, before going further, can I just check that I understand what you're saying?

It sounds like you're saying that 'meaning' cannot occur in a syntactic system without being imposed or interpreted by an external source.

Is that right?

Secondly, can I clarify your use of 'syntactic'- does it basically refer to, for example, a circuit in a calculator, ie straightforward logical operations which happen to be enacted on a physical medium (circuits/chips)?

ie the semantic content is there only because of the way we interpret the output (as numbers, for example).

is that what you mean by 'syntactic'?

"You can't outrun Death forever.
But you can make the Bastard work for it."

--MAJOR KORGO KORGAR,
"Last of The Lancers"
AFC 32


Educate your self in the Hazards of Fire Breathing STAY SAFE!


quietanalytic
503 posts
Location: bristol


Posted:
ok:

1. I suggest we leave the Chinese Room issue to one side, for the time being. I think that Dennett has some very strong arguments on this score, and the debate could turn out to be very protracted; I also think that if you don't find Searle's arguments immediately convincing, then it'd take an awful lot of argument to make them convincing. I did read the book as an undergraduate, but I can't remember the details off-hand.

2. I think you've basically got a handle on what I mean when I invoke the syntax/semantics distinction.

I'd prefer to keep talking of symbols, since I can't very well think of a way of talking about syntax without them. So, to clarify:

i) Syntax just specifies operations which are to be carried out on inputs. The syntax is the program, rather than the physical circuit, although the circuit can implement the program. The AND gate is a syntactic operator: if and only if both the inputs have the value 1, the output will be 1.
ii) The problem is that the value '1' can mean virtually anything.
iii) Put another way: the syntax takes an input, and generates an output. But the input and output are symbols, things like 1/0 or 'p' or 'q'. And these symbols could mean anything. Unless you fix what they mean, you don't have meanings - that is, you don't have semantics.
iv) So the program isn't sufficient to generate meaningful outcomes (or thoughts, in the case of UTOR). The _content_ has to come in elsewhere.
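To make (i)-(iii) concrete, here's a toy Python sketch (my own invented example - the gate and the two interpretation tables are illustrative only):

```python
# Toy illustration: the AND gate is pure syntax; what '1' means is
# fixed from outside, not by the gate itself.

def and_gate(a, b):
    """Output 1 if and only if both inputs are 1."""
    return 1 if a == 1 and b == 1 else 0

out = and_gate(1, 1)

# Two external interpretations of the very same output value:
as_voting = {1: 'both members voted yes', 0: 'the motion fails'}
as_launch = {1: 'both keys were turned', 0: 'nothing is launched'}
print(as_voting[out])
print(as_launch[out])
```

The gate computes exactly the same thing either way; only the dictionaries - the externally supplied semantics - differ.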

Does that make sense?

I know I'm not managing to be desperately clear about this; apologies.



onewheeldaveGOLD Member
Carpal \'Tunnel
3,252 posts
Location: sheffield, United Kingdom


Posted:
That does make sense, I'm now 99% sure that I understand what you mean with those terms.

A minor quibble here: -


Written by: quiet


iii) Put another way: the syntax takes an input, and generates an output. But the input and output are symbols, things like 1/0 or 'p' or 'q'. And these symbols could mean anything. Unless you fix what they mean, you don't have meanings - that is, you don't have semantics.





I would want to say that 'output and input are symbols' potentially could lead to problems- I'd say that the input and output are actually electrical signals, to which we attribute meaning (and therefore symbol status).

--------------------

You seem to be saying that a program consisting of what is basically a sequence of logical operators cannot produce semantics/meaning in isolation.

My concern lies with potential confusion of 'meaning' when seen from the two different viewpoints of: -

1. observing such a system from without (eg, observing a calculator/computer, observing a human brain etc)

2. being such a system, ie feeling from within. In the case of a calculator or contemporary PC, there is no such perspective, as they lack consciousness. In the case of a human brain, and, arguably, future AIs, there is such a perspective.

And this is where I believe that 'symbols, semantics, meaning' etc, create a lot of confusion.

Partly because, from perspective one, 'meaning' refers to something very different to any 'meaning' of the thoughts/feelings of the being observing itself.

There are, IMO, two other factors that are very relevant here: -

1. Complexity- as mentioned above, conscious systems may be based on the syntactic structure of a simple electronic calculator, but they are unimaginably more complex.

And i mean that literally- a human brain can understand how a calculator gets input, processes it and outputs the result.

In no way can a human mind understand how a brain can produce consciousness. I'm fairly certain that the current complexity of most computers is also beyond the comprehension of a human mind, ie I don't think that anyone can fully grasp how a PC works in the way that is possible with simpler systems.

2. Feedback- any conscious system spends much of its processing power observing itself.

imagine a relatively simple robot designed to map an area- it has to roam it, and store the details it 'observes'. We can, in principle, understand how it does this.

A consciousness literally maps itself- it is aware of internal processes (as 'feelings'). As I'm sure you know, there is a qualitative difference between a mathematical function that takes straight input from an independent process, and one that iterates using its own output as input.

Such a process throws up patterns and structures that, though completely deterministic, are, even in principle, unpredictable (other than by actually running the process itself).
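A toy Python sketch of that qualitative difference (my own illustration- I've used the standard logistic map as the feedback process, which isn't something from UTOR itself):

```python
# Toy illustration: the logistic map feeds its own output back in as
# input. It is fully deterministic, yet (at r = 4) there is no known
# practical shortcut to finding the nth value other than running the
# iteration itself.

def iterate(x0, r=4.0, steps=50):
    x = x0
    history = [x]
    for _ in range(steps):
        x = r * x * (1 - x)   # output becomes the next input
        history.append(x)
    return history

a = iterate(0.400000)
b = iterate(0.400001)   # an almost identical starting point
print(a[-1], b[-1])     # the two trajectories end up wildly different
</antml>```

Two starting points that agree to six decimal places diverge completely within fifty iterations- deterministic, but not graspable in the way a calculator's straight input-to-output pipeline is.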

I guess that what I'm saying here is that, during that amazing process whereby consciousness appears, some very weird stuff happens that is way, way too complex for us ever to grasp fully.

And that process is essentially fully syntactic; so if, as you claim, semantics cannot result from syntax, then consciousness, viewed from within, does not require semantics.

"You can't outrun Death forever.
But you can make the Bastard work for it."

--MAJOR KORGO KORGAR,
"Last of The Lancers"
AFC 32


Educate your self in the Hazards of Fire Breathing STAY SAFE!


quietanalytic
503 posts
Location: bristol


Posted:
but look:

1. consciousness is a process
2. processes involve change
and
3. there is no change in the mathematical realm



onewheeldaveGOLD Member
Carpal \'Tunnel
3,252 posts
Location: sheffield, United Kingdom


Posted:
It's not that simple.

UTOR is claiming that consciousness is a 'program' (not a 'running' program, and not a process).

Consciousness is a mathematical object, like pi.

You are right that there is no change in the mathematical realm.

In philosophy, the debate around A-series and B-series time is about whether the timeline (ie our experiences, which certainly seem to involve 'change') can be accounted for by a description which does not involve change.

ie is there a 'flow' of time? is there a moving 'now' point?

My conclusion was that all temporal experiences, including perceptions of events as past or future, the flow of time etc, can be explained in terms of an unchanging ('B-series') scenario.

If that is the case, then your premises above-

'1. consciousness is a process
2. processes involve change'

do not hold, because consciousness may appear to be a process which involves change; but that appearance of change can be totally explained in terms of a changeless 'b-series' type timeline.

This is entirely compatible with the unchanging mathematical realm used in UTOR.

(I am fully aware of the danger of 'hijacking' common terms in philosophy (eg 'reality', 'change' etc), and leaving them empty of practical meaning.

With 'change' I fully acknowledge its use in describing our experiences in this world- what I am wanting to avoid is pulling that term out of its range of application by applying it to time as a whole- I believe that applying 'change' to the timeline as a whole is incorrect).

"You can't outrun Death forever.
But you can make the Bastard work for it."

--MAJOR KORGO KORGAR,
"Last of The Lancers"
AFC 32


Educate your self in the Hazards of Fire Breathing STAY SAFE!


spiralxveteran
1,376 posts
Location: London, UK


Posted:
Have you read Julian Barbour's "The End Of Time"? He constructs a physical theory in which time is a secondary product arising from a fundamentally timeless world.

https://www.oup.com/us/catalog/general/subject/Physics/?ci=0195117298&view=usa

It's a good book smile

"Moo," said the happy cow.


quietanalytic
503 posts
Location: bristol


Posted:
simple question: if consciousness is a program, what's the 'running program'?



onewheeldaveGOLD Member
Carpal \'Tunnel
3,252 posts
Location: sheffield, United Kingdom


Posted:
'Running' is to distinguish between the program as pure algorithm (non-running), and the program as actually enacted on hardware (running).

Possibly the same as what you meant by 'instantiated'.

Take a very simple program like-

a=a+1

which simply increments 'a'

that mathematical entity can be enacted on a wide variety of different hardware, in a wide variety of different ways- each of which is simply a representation of the single mathematical entity.

These enacted representations must take place in a flow of time, whereas the mathematical entity does not take place in a flow of time.

UTOR describes consciousness, not as a 'running' or enacted program, but as program in the form of pure mathematical entity.
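A toy Python sketch of the point (my own illustration- the two function names are invented): the single mathematical entity 'a becomes a+1', under two quite different enactments:

```python
# Toy illustration: two different 'enactments' of the single
# mathematical entity a -> a + 1.

def increment_arith(a):
    return a + 1   # realised via built-in arithmetic

def increment_bits(a):
    # realised via a ripple-carry of bits, roughly as hardware might
    # do it (works for non-negative integers)
    carry = 1
    while carry:
        a, carry = a ^ carry, (a & carry) << 1
    return a

# Different realisations, one and the same mathematical entity:
assert all(increment_arith(n) == increment_bits(n) for n in range(1000))
```

Each enactment unfolds step by step in time; the entity they both represent does not.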

"You can't outrun Death forever.
But you can make the Bastard work for it."

--MAJOR KORGO KORGAR,
"Last of The Lancers"
AFC 32


Educate your self in the Hazards of Fire Breathing STAY SAFE!


Bender_the_OffenderGOLD Member
still can't believe it's not butter
6,978 posts
Location: Melbourne, Australia


Posted:
forty-two!

Laugh Often, Smile Much, Post lolcats Always


quietanalytic
503 posts
Location: bristol


Posted:
right, but that's not my question:

1. if consciousness is a program, what's the 'running program'?
2. so you think that consciousness is the software, and then there's something else when you have 'enacted representations'.
3. so what is it?

i just can't shake the feeling that people are conscious; that consciousness goes away when you, for instance, go into a coma; and that we should therefore treat consciousness as a 'running program', and hence *not* an 'unchanging mathematical entity'



onewheeldaveGOLD Member
Carpal \'Tunnel
3,252 posts
Location: sheffield, United Kingdom


Posted:
Let's forget the 'running' program; consciousness is not a running program.



I'm confused about what you're confused about, and why you keep asking about the running program; I only brought it up because of a distinction you seemed to make between instantiated and non-instantiated programs.



Let me just say what consciousness is (according to UTOR)- it's a non-temporal mathematical object.



Conscious experience certainly includes the appearance of change, yet that is perfectly compatible with it being an unchanging mathematical entity.



Take the normal view of consciousness as a line of experiences which become 'active' in turn as the 'present' or 'now-point' occurs.



Prior to the 'now-point' is a collection of 'past' experiences; at the now-point is the present experience; up in front are the future experiences.



That view requires 'change', in the form of a temporal flow (the moving now-point); however, note that the 'instants of awareness' that make up the actual line of consciousness do not change (other than to become 'active' when the now-point reaches them).



UTOR posits the exact same collection of instants of awareness; however, it says there is no 'activation' according to the passing of some hypothetical 'now-point'- this is not possible because they are in a timeless and changeless realm.



Instead, each instant is continuously active- there is no 'change'.



However, being exactly the same set of instants as the ones in the normal view of consciousness (the one with the 'now-point')- the experience is the same; for example, there is the appearance of change.



A consequence of this is that any appearance of change in the normal view of consciousness (the one with the now-point) could not have been down to the presence of actual change of the moving now point.

"You can't outrun Death forever.
But you can make the Bastard work for it."

--MAJOR KORGO KORGAR,
"Last of The Lancers"
AFC 32


Educate your self in the Hazards of Fire Breathing STAY SAFE!


spiralxveteran
1,376 posts
Location: London, UK


Posted:
Written by: onewheeldave


Let's forget the 'running' program; consciousness is not a running program.



Why? And if it isn't, then why can't you run it?

"Moo," said the happy cow.


onewheeldaveGOLD Member
Carpal \'Tunnel
3,252 posts
Location: sheffield, United Kingdom


Posted:
What I should have said is that according to UTOR, consciousness is not a running program.

As to why: it's because (according to UTOR) the 'program' of consciousness is a mathematical object that exists in a non-temporal (unchanging, no time flow) mathematical realm- as such it can't be 'run', as running involves time and change.

"You can't outrun Death forever.
But you can make the Bastard work for it."

--MAJOR KORGO KORGAR,
"Last of The Lancers"
AFC 32


Educate your self in the Hazards of Fire Breathing STAY SAFE!


quietanalytic
503 posts
Location: bristol


Posted:
Ah, now I think we're getting somewhere. Let me check - you think that:

1. Consciousness is a program [and hence exists as a mathematical object].
but
2. This program cannot be 'run'.

and now I'm baffled, for two separate reasons. The first is a minor problem:

i) What is a program, other than something which specifies a *process* - i.e. something that can be 'run'?

And the second is a bigger one:

ii) You talk of the 'now-point' moving in the atemporal mathematical realm. But motion itself requires time: 'movement' is 'displacement / time', or 'change of displacement', etc: and, ex hypothesi, the mathematical realm is atemporal. So the 'now-point' cannot move around in the mathematical realm.



ben-ja-menGOLD Member
just lost .... evil init
2,474 posts
Location: Adelaide, Australia


Posted:
quiet, think of a curved surface: every point on that surface is defined. Now if you place a ball on that surface at some random point and let it go, the ball will roll; yet every point the ball travels through can be calculated from some simple physics and the geometry of the surface. The surface doesn't change, but from the ball's perspective it is changing.

Our deepest fear is not that we are inadequate. Our deepest fear is that we are powerful beyond measure. It is our light, not our darkness that most frightens us. We ask ourself, who am I to be brilliant, gorgeous and talented? Who are you NOT to be?


quietanalytic
503 posts
Location: bristol


Posted:
argh, no no no no you miss the point:

I fully understand that you can explain an appearance of change by talking about a changing perspective on an unchanging domain (surface, line, etc.)

But that's not the problem: the problem is that IN ORDER FOR THE PERSPECTIVE TO CHANGE (or move, etc), you need to talk about TIME. Hence you can't even get the *appearance* of change in a timeless, unchanging realm.



spiralxveteran
1,376 posts
Location: London, UK


Posted:
Actually, consider a line drawn through a configuration space (such as a 6-dimensional space specifying all possible positions and velocities of a particle in 3D space). There is no time in this space, yet you would read the line as the history of a particle moving through space- in terms of the configuration space itself, though, there is no time, just a static line.
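A toy Python sketch of that static line (my own illustration, with made-up parameters- a point falling under constant acceleration):

```python
# Toy illustration: a particle's whole 'history' stored as a static,
# unordered collection of (position, velocity) points. No time
# variable appears anywhere in the finished object.

def trajectory(x0, v0, a=-9.8, dt=0.01, n=200):
    points = []
    x, v = x0, v0
    for _ in range(n):
        points.append((x, v))
        x, v = x + v * dt, v + a * dt
    return frozenset(points)   # unordered and unchanging

line = trajectory(0.0, 10.0)
# 'line' just sits there; any talk of 'motion' is our reading of it.
print(len(line))
```

The loop that built the set is incidental; the resulting frozenset is a single changeless object, which is all the configuration-space picture needs.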

"Moo," said the happy cow.


quietanalytic
503 posts
Location: bristol


Posted:
sure - and that wouldn't be enough to give the appearance of change . . .



onewheeldaveGOLD Member
Carpal \'Tunnel
3,252 posts
Location: sheffield, United Kingdom


Posted:
Quiet- your objections are good, and maybe it's time to re-write UTOR again to take account of the points you've raised.

For now I'll focus on your question of how consciousness and apparent change can be accounted for in the non-temporal environment of the mathematical realm.

Imagine every instant of your consciousness, from your birth to your death, laid out from left to right along a time-line.

A moving 'now-point' (the present) visits each instant in sequence and 'activates' it; representing your 'present' experience.

The now-point is moving, therefore change and temporality are necessary.

Now let's switch to the UTOR account- the instants of your life are a collection in the timeless realm of the mathematical world; each and every one is permanently 'activated'- there is no moving now-point, no time-line, no time and no change.

"You can't outrun Death forever.
But you can make the Bastard work for it."

--MAJOR KORGO KORGAR,
"Last of The Lancers"
AFC 32


Educate your self in the Hazards of Fire Breathing STAY SAFE!

