View Full Version : Factor



danbaron
25-01-2011, 08:39
Maybe I can understand a little bit of Factor. --> http://www.factorcode.org/

Here is a quote from, http://concatenative.org/wiki/view/Concatenative%20language/Name%20code%20not%20values .


Here is another dataflow pattern:

var x = ...;
bar(x,foo(x));

In Factor, we call this dup. If the value x is at the top of the stack,

dup foo bar

applies foo to x, then applies bar to x and the result from foo. Factor calls the "normal" languages we are used to, "applicative" languages.

Above, first it shows how the pattern is performed in an applicative language. A value is assigned to x, then, the function, "bar", is applied to the two parameters, x, and foo(x).

In Factor, instead, you put x on the stack and duplicate it (dup), so that x is at both stack level 1 and stack level 2. Then the "word" (function) foo is applied to x at stack level 1, and the result replaces x at stack level 1. Then bar is applied to foo(x) at level 1, and to x at level 2. I think in this case the word bar must be constructed so that it expects foo(x) at level 1 and x at level 2. Otherwise, I think the Factor implementation would instead be,


dup foo swap bar

("swap", swaps the contents of stack level 1 and stack level 2.)

But, how do you put x on the stack?

As far as I know, you put its value on the stack, i.e., if x equals 3, then,


3 dup foo bar
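
For example, if, "foo", is played by, "1 +", and, "bar", by, "*", then the applicative form is bar(x, foo(x)) = x * (x + 1), and with x equal to 3, I think the Factor version is,

3 dup 1 + * .

In the listener, this should push 3, duplicate it, add 1 to the copy on top, multiply the two values, and then, ".", prints the result, 12.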

------------------------------------------------------------------------------------------

I swear it's hard to figure out how to do the simplest things in Factor.

So far, I have no idea how to assign a value to a variable.

The Factor website doesn't show how to do much.

And even though the help system that comes with Factor is big, to me, it's like trying to learn a human language by looking at a dictionary.

Here is another quote from, http://concatenative.org/wiki/view/Concatenative%20language/Name%20code%20not%20values


Concatenative language/Name code not values

In a stack language, your program is a sequence of words and literals -- literals are pushed on the stack when
encountered by the evaluator, and words are either primitive or they are subroutines, themselves sequences of words and
literals.

Values which are passed between words -- "parameters" -- are not named, and are not referenced directly by the language.
Your values can be as rich and complex as you want: Factor supports a rich set of collections (arrays, hashtables, etc.)
as well as user-defined classes with named slots. However the values themselves are not named. Just like in a language
like Lisp or Java, in Factor we still name the following:

* Words (functions)
* Classes
* Slots (instance variables of classes)
* Global variables
* ... and more

However, we don't name parameters to words.

I don't know how to define and set a global variable.

So, far, I am only able to push values onto the stack, and operate on them there.

After I have operated on a stack value, how do I save it to be used later?

Don't ask me.

I'm not saying there is anything wrong with Factor, or that I am giving up on it.

I am saying that trying to understand it from the available documentation, is a real exercise in frustration.

Maybe your experience will be better.

:mad:

danbaron
26-01-2011, 09:18
I found out how to set a variable.


SYMBOL: x
2 x set
x get

Before you can use a variable, you have to define it as a symbol. From inside the IDE, when you type, "SYMBOL: x", and press ENTER, the data stack is not altered, but x has become a symbol. According to the, "Dynamic variables cookbook", inside the IDE help system, "A symbol is a word which pushes itself on the stack when executed.".

When you type, "2 x set", and press ENTER, 2 is pushed onto the stack, x is pushed onto the stack, then "set" is executed, and 2 and x, are popped off of the stack.

When you type, "x get", and press ENTER, 2 appears on the top of the stack.

Of the three lines of code, only Line 1 has to be entered in its entirety. Lines 2 and 3 can be entered piece by piece. For instance, for Line 2, you could first enter 2, then enter x, then enter "set". Before entering "set", you will see x and 2 in the first two stack levels. When you enter "set", x and 2 will be popped off of the stack.

I still don't understand the distinction between dynamic and global variables. Additionally, there are namespaces, and, I don't know how to manipulate them.
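
My guess at the global part, using the words, "set-global", and, "get-global" (which are also in the "namespaces" vocabulary), is something like this (y is just another made-up symbol),

SYMBOL: y
5 y set-global
y get-global

As far as I can tell from the documentation, "set", writes into whatever namespace is currently on top of the namespace stack, while, "set-global", always writes into the global namespace at the bottom, and, "get", searches the namespace stack from top to bottom, ending at the global one. But, I have not explored this much, so take it as a sketch.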

:eek:

Charles Pegge
26-01-2011, 12:00
Factor is intriguing but I see it as a sort of abstract machine language which you might use to build other programming languages. It might benefit from using more descriptive words and fewer symbols. And one major weakness of using stacks for almost everything is that you need to know how many parameters are taken by each function. If it is not explicitly indicated, then the language is very hard to decipher.

Charles

danbaron
26-01-2011, 22:37
I happened to see this last night, Charles.

I was lucky to be able to find it again today.

I think it tries to address your point about "rest" parameters.

http://concatenative.org/wiki/view/Concatenative%20language/Rest%20parameters

Dan

danbaron
27-01-2011, 07:53
SYMBOL: x
2 x set
x get
I said above, that of these three lines of code, Line 1 is an exception. Each of the other two lines can be entered as shown (all at once), or, they can be entered one token at a time, like, for Line 2, "2 ENTER", "x ENTER", "set ENTER".

Line 1 must be entered all at once, "SYMBOL: x".

If you try to just enter, "SYMBOL:", you get an error. The error is, that you are trying to make a word out of the name, "f". In Factor, f is the boolean, false. So, I guess the null value (empty string) is equivalent to f.

If you try to just enter, "x", you also get an error. The error is that the word x is not found in the current vocabulary search path.

So, in Factor, you cannot just push anything onto the (data) stack. The language analyzes whatever you try to push. If it cannot determine what it is that you are trying to push, it generates an error.

The result is, that Line 1 must be entered all at once. Is that an inconsistency/exception/compromise in the language? I would say it is. On the other hand, so far, I have never seen a language without them.

:eek:

Charles Pegge
27-01-2011, 11:23
I guess there is a symbolic name stack coupled with the main stack - normally left empty, but allowing positions in the stack to be identified by name.

Charles

danbaron
27-01-2011, 13:58
Maybe this is what you mean, maybe not. - Dan

"So what makes stack languages different? The key concept here is that there are multiple stacks: all stack languages have a call stack to support recursion, but they also have a data stack (sometimes called an operand stack) to pass values between functions. The latter is what stack language programmers mean when they talk about "the" stack."

From:

http://concatenative.org/wiki/view/Concatenative%20language

Charles Pegge
27-01-2011, 17:04
I would envisage a suitable stack like this:



type celltype
  typ as long    'validity and type
  nam as string  'optional symbolic name
  var as variant 'value
end type

dim as celltype cell, stack(256)

dim as long spt=256 'stack pointer


Variants are very useful for holding data of any common type, but for the purposes of combining strings and numbers, you still need to have the typ field to indicate which kind of data has been stored.

Charles

Charles Pegge
28-01-2011, 19:22
Joy, a functional language



Abstract: Joy is a high-level purely functional programming language which is not based on the application of functions but on the composition of functions. This paper gives a rationale for Joy by contrasting it with other paradigms of functional languages. Joy differs from lambda calculus languages in that it has no variables and hence no abstraction. It differs from the combinatory calculus in that it does not use application. It differs from the categorical languages in uniformly using an untyped stack as the argument and value of the composed functions. One of the datatypes is that of programs, and the language makes extensive use of this, more than other reflective languages. The paper gives practical and theoretical introductions to various aspects of the language.


Much of this paper goes over my head - the discussion is far more complicated than the thing being discussed! But here we have a language (previous generation to Factor) which has no variables, no control structures that we would normally recognise, and an extremely simple syntax.

Programming is achieved by specifying sequences and combinators. Functions are sequences, data is sequences, and the stack is also a sequence. Like Lisp, sequences are either quoted or evaluated.

Charles

http://www.latrobe.edu.au/philosophy/phimvt/joy/j00rat.html

danbaron
28-01-2011, 22:48
I don't have much time now.

I'll have to look at that article later.

"the discussion is far more complicated than the thing being discussed!"

I like that quote. How often is it true concerning many topics, maybe especially with respect to anything concerning politics, economics, and justifications for wars?

I read about Joy, but assumed that Factor is an improved version of it, from Slava Pestov's quote, -->

Why do we need a new stack-based language?
Because the other ones aren't suitable for high-level development. Forth (http://concatenative.org/wiki/view/Forth) is great for low-level things, but its lack of type system and garbage collection make it difficult to debug programs, and it doesn't mesh well with functional programming. Joy (http://concatenative.org/wiki/view/Joy) made a very important theoretical contribution, but it is difficult to compile efficiently, its syntax is inextensible and it has an insufficient module system. Additionally, it is almost purely functional, making many things difficult. Factor combines the best aspects of these two systems, together with many other borrowings from various places.

http://concatenative.org/wiki/view/Factor/FAQ/Why%3F

Your quote from the paper, and what you wrote, give me the idea that Joy comes close to consisting of nothing at all. Maybe that is the ultimate goal of some theorists, a language which is impossible to criticize for what it contains, because, it doesn't contain anything!

:p

danbaron
30-01-2011, 10:31
---------------------------------------------------------------

Here is an example of a word definition which uses named lexical (local) parameters. It comes from the Factor help browser, under the article, "Examples of Lexical Variables". It finds the two roots of any quadratic equation,

ax^2 + bx + c = 0

---------------------------------------------------------------

USING: locals math math.functions kernel ;
IN: quadratic
:: quadratic-roots ( a b c -- x y )
    b sq 4 a c * * - sqrt :> disc
    b neg disc [ + ] [ - ] 2bi [ 2 a * / ] bi@ ;

---------------------------------------------------------------

As far as I can tell, "USING:", "IN:", and, ";", are built into the language.

";", marks the end of a definition.

Notice, that in Factor, spaces delimit words. For instance, above, I think it would have been an error to put, "kernel;".

---------------------------------------------------------------

"USING: locals math ...", means that the following word definitions use words from the listed vocabularies.

---------------------------------------------------------------

"IN: quadratic", means that the following word definition(s) will be found in the vocabulary, "quadratic".

---------------------------------------------------------------

You need the vocabulary, "locals", in order to be able to use the words, "::", and, ":>".

You need the vocabulary, "math", in order to be able to use the arithmetic words (+ - * /), and also, "neg".

You need the vocabulary, "math.functions", in order to be able to use the word, "sqrt".

You need the vocabulary, "kernel", in order to be able to use the words, "2bi", and, "bi@".

---------------------------------------------------------------

"::", "Defines a word with named inputs. The word binds its input values to lexical variables from left to right, then executes the body with those bindings in scope." (from the Factor documentation).

---------------------------------------------------------------

"quadratic-roots", is the name of the word being defined.

---------------------------------------------------------------

(I'll show the definition again.)

USING: locals math math.functions kernel ;
IN: quadratic
:: quadratic-roots ( a b c -- x y )
    b sq 4 a c * * - sqrt :> disc
    b neg disc [ + ] [ - ] 2bi [ 2 a * / ] bi@ ;

---------------------------------------------------------------

"(a b c -- x y)", is the "stack effect declaration". I think that every word definition must have one. I think they indicate how many parameters (inputs) a word pops from the stack, and, how many parameters (outputs) the word pushes onto the stack. I think the names used in the declarations, are arbitrary. Apparently, "--", delimits the "before" parameters, from the "after" parameters.

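For instance, I think the most basic kind of word definition, with one input and one output, entered directly in the listener, would be something like this (the name, "double", is just made up for illustration),

: double ( x -- y ) 2 * ;
7 double .

which should define the word, "double", and then print 14. Here, the names x and y are only documentation.
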
---------------------------------------------------------------

When the word, "quadratic-roots", is invoked, three values are popped off the stack. Within the word definition, they are called, "a", "b", and "c".

---------------------------------------------------------------

Now to the word's ("quadratic-roots") code, - the first line.

b sq 4 a c * * - sqrt :> disc

First, b is pushed onto the stack, and then squared ("sq"). Next, 4, a, and c, are pushed onto the stack. Next, a and c are multiplied (word, "*") together, giving, ac. Next, ac is multiplied ("*") by 4, giving, 4ac. Next, 4ac is subtracted ("-") from b^2, and the square root of the result is taken ("sqrt"). Next, word, ":>", binds the quantity, sqrt(b^2 - 4ac), to local variable, "disc" (discriminant).

---------------------------------------------------------------

Now to the second line of the word's code.

b neg disc [ + ] [ - ] 2bi [ 2 a * / ] bi@ ;

First, b is pushed onto the stack. Next, -b is substituted for b ("neg"). Next, "disc", is pushed onto the stack.

What appears next, "[ + ]", is called a quotation. From the Factor documentation,

"A quotation is an anonymous function (a value denoting a snippet of code) which can be used as a value and called using the Fundamental combinators.
Quotation literals appearing in source code are delimited by square brackets, for example [ 2 + ]; see Quotation syntax for details on their syntax."

Anything that appears inside square brackets, is a quotation. When a quotation is pushed onto the stack, it is not evaluated (but, it is analyzed; if it contains a word which Factor does not recognize, an error will be generated). Some Factor words operate on quotations. When one of these words is invoked, the code inside a quotation's brackets is executed.
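
(As an aside, I think the simplest such word is, "call", which just executes the quotation on top of the stack. For example, entering,

3 [ 1 + ] call .

should, if I understand the documentation, push 3, push the quotation, execute it to leave 4, and then print the 4 with, ".".)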

So, next, the two quotations, "[ + ]", and, "[ - ]", are pushed onto the stack.

The word, "2bi", has the stack effect declaration, "( x y p q -- )". x and y are values, and p and q are quotations.

"2bi", applies p to x and y, and then applies q to x and y. So here, first, "+", is applied to -b and disc (-b and disc are added), and then, "-", is applied to -b and disc (disc is subtracted from -b). Now, level 1 of the stack contains the value, "-b - disc", and level 2 of the stack contains the value, "-b + disc".

Next, the quotation, "[ 2 a * / ]", is pushed onto the stack.

The word, "bi@", has the stack effect declaration, "( x y quot -- )". x and y are values, and quot is a quotation.

"bi@", applies quot to x, and then applies quot to y. So here, first, "2 a * /", is applied to, "-b + disc" (stack level 2), and then, "2 a * /", is applied to, "-b - disc" (stack level 1). The final stack configuration has the value, "(-b + disc)/2a", at stack level 2, and, "(-b - disc)/2a", at stack level 1.

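As a sanity check on, "2bi", and, "bi@", here is, I think, the same pattern with plain numbers instead of the quadratic quantities. In the listener,

10 4 [ + ] [ - ] 2bi

should leave 14 at stack level 2 and 6 at stack level 1, and then,

[ 2 / ] bi@

should divide both by 2, leaving 7 at level 2 and 3 at level 1.
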
";", marks the end of, word, "quadratic-roots".

---------------------------------------------------------------

In order to invoke (call) the word, "quadratic-roots", from within the Factor Listener, first, you have to make a folder called, "quadratic", and put it inside the Factor folder called, "work". Inside that folder, you must write and save (using your editor) the "quadratic-roots" definition, inside a file called, "quadratic.factor". Then, at the Factor Listener prompt, you can type and enter, "USE: quadratic". Now, you can test the function (word). If you type and enter, "1 -3 2 quadratic-roots", the data stack will display, "1 2", which are in fact, the correct roots. The function also works for complex roots.
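
In other words, I think the whole listener session boils down to something like this,

USE: quadratic
1 -3 2 quadratic-roots
. .

where the word, ".", pops the top of the data stack and prints it, so the two roots get printed one after the other.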

I think the vocabulary you have defined, "quadratic", corresponds to a program in other languages. I guess most user-defined vocabularies (programs) consist of multiple word definitions. One word can be designated as the entry point, with, "MAIN:". It will be executed when the vocabulary name is passed to the word, "run".
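
If I understand it correctly, a complete toy vocabulary would look something like this (the vocabulary name, "hello", and everything in it, are made up for illustration),

USING: io ;
IN: hello

: greet ( -- ) "Hello from Factor" print ;

MAIN: greet

Then, at the listener, entering, "hello" run, should load the vocabulary if necessary, and execute the word marked by, "MAIN:".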

---------------------------------------------------------------

Wow, it seems like a lot of work to find the roots of a quadratic equation, doesn't it?

Maybe this explanation is so long, because this is all new, yes?

So far, I haven't seen what the payoff is for learning this new way of doing things.

Probably, it is coming, correct?

---------------------------------------------------------------

From Charles' post about Joy, I looked at its home page,

http://www.latrobe.edu.au/philosophy/phimvt/joy.html

Its creator, Manfred von Thun, is a retired philosophy professor.

He has a page devoted to computing quadratic roots, using Joy,

http://www.latrobe.edu.au/philosophy/phimvt/joy/jp-quadratic.html

Apparently, problems of this type are a nemesis for stack-based languages. Maybe that is why Slava Pestov showed how to do it in Factor.

I can begin to understand why the paper which Charles' quote about Joy comes from is so dense and difficult. My experience is that this is exactly the way philosophy books are. Philosophers seem to delight in writing page after page of detailed minute tedium. Interestingly, often philosophers "prove" things. From my perspective, they never prove anything. Because, all proofs begin with one or more assumptions. No proof begins with nothing, at zero. But, I don't intend to bash philosophers. The successful ones are very smart. I like to look at their books, and, I am interested in von Thun's stuff, which there is a lot of. But, when I read that kind of material, I get quickly fatigued. I am somewhat fascinated by people who revel in expounding endlessly about such microscopic detail, but, I don't understand them. I think there is an amazing variation in human brains.

(Now that I think about it, I think all philosophic proofs are in the form of, "IF-THEN". If you believe the, "IF", then, they'll prove the, "THEN". But, I'm not sure they are able to do it. It's not mathematics, what objective way is there to show that the, "THEN", does what it claims to do?)

:o

Charles Pegge
31-01-2011, 07:55
In the 70s I came across a philosophy book advertised in the Whole Earth Catalog (unusual for an academic work!). It was called Laws of Form by G Spencer-Brown. It is a short work exploring the primal basis of Logic. I could not find a copy on the Web, but the first half of the Wikipedia article covers it well, then muddies the clear waters.

I can't represent Spencer-Brown's notation here but perhaps a pair of empty brackets will suffice instead. () :)

This work is so fundamental - I like to think of it as the philosophy of digital technology - and possibly the key to cognition and consciousness itself.

Charles


http://en.wikipedia.org/wiki/Laws_of_Form
http://www.lawsofform.org/ideas.html

http://en.wikipedia.org/wiki/G._Spencer-Brown

danbaron
31-01-2011, 08:59
((A)A) =

A pdf file of the scanned pages of the prefaces and introduction can be downloaded from here.

http://pdfdatabase.com/download/spencer-brown-laws-of-form-pdf-3409340.html

You have to rotate it 90 degrees to read it.

In the 1979 preface, at first I thought he was writing about his own, "untimely death in 1976". Then, I realized, he had specified, D J Spencer-Brown.

And, you can get a used (expensive) copy of the book from Amazon.

http://www.amazon.com/Laws-Form-G-Spencer-Brown/dp/0963989901

:o

LanceGary
31-01-2011, 09:02
---------------------------------------------------------------

I can begin to understand why the paper which Charles' quote about Joy comes from is so dense and difficult. My experience is that this is exactly the way philosophy books are. Philosophers seem to delight in writing page after page of detailed minute tedium. Interestingly, often philosophers "prove" things. From my perspective, they never prove anything. Because, all proofs begin with one or more assumptions. No proof begins with nothing, at zero. But, I don't intend to bash philosophers. The successful ones are very smart. I like to look at their books, and, I am interested in von Thun's stuff, which there is a lot of. But, when I read that kind of material, I get quickly fatigued. I am somewhat fascinated by people who revel in expounding endlessly about such microscopic detail, but, I don't understand them. I think there is an amazing variation in human brains.

(Now that I think about it, I think all philosophic proofs are in the form of, "IF-THEN". If you believe the, "IF", then, they'll prove the, "THEN". But, I'm not sure they are able to do it. It's not mathematics, what objective way is there to show that the, "THEN", does what it claims to do?)

:o


Actually science is no better than philosophy. David Hume long ago pointed out the weakness of induction - the fact that something has happened constantly in the past is no proof that it will continue to happen in the future. Popper tried to answer this by suggesting that scientists actually put forward a deductive argument in which a hypothesis is tested. So the scientist would say, "if ... [some hypothesis] then ... [some conclusion]". But again the clause following the "if" part depends upon all sorts of assumptions, and everything turns on whether you believe those assumptions. So both science and philosophy turn on deductive arguments, both require that you accept the truth of some premises, and in both cases not all the premises can be demonstrated beyond question.

Lewis Carroll (see http://www.ditext.com/carroll/tortoise.html) showed that logical inferences are not automatic, they are not necessities. They are in fact acts of thinking that we choose to make. So not only must we take some premises on faith, but we must also choose to draw inferences (and can sometimes refuse to do so) in order to reach our conclusions.

Lance

danbaron
31-01-2011, 10:24
When Hawking's and Mlodinow's latest book came out, I read a criticism of it from a philosopher Catholic priest, Robert J. Spitzer. (As an aside, here is his latest book.)

http://www.amazon.com/New-Proofs-Existence-God-Contributions/dp/0802863833/ref=sr_1_1?s=books&ie=UTF8&qid=1296460070&sr=1-1

I want God to exist, but, I don't like deception.

It seems to me that only one proof of God's existence would be necessary, yes or no?

Anyway, then I looked at Spitzer's web site, -->

http://www.magisreasonfaith.org/

He has a column there, called, "Ask Fr. Spitzer". I guess, mostly young people, ask him to expound on God related questions.

In one of his answers, he was attempting to provide a proof of his view. One of the axioms he based it on was, "From nothing, only nothing can come.".

He lost my attention there. To me, that, "axiom", is not more than a human observation, which has become part of human intuition.

Similarly, as far as I know, there are no physical "laws". There are only observations of what has happened in the past.

So, in a certain sense, I think most philosophical proofs, are motivated by the drive for ego gratification (and material security, i.e., tenure). Century after century philosophers argue about God's existence. Today, they argue about the idea of free will. To me, they fight over issues which they know, are and most likely will remain, humanly indeterminate. That causes me to believe, that their primary motives for doing so, are much more personal than altruistic.

(Just for fun, I can give my little, "proof", that we do not have free will (but, I hope I am wrong). Here it is. --> Science knows about two types of physical processes, "deterministic" (e.g., planet orbits), and, "random" (e.g., radioactive decay). Assuming there is no third unknown type, then, the operation of the human brain must be some combination of the two. Any such combination precludes the possibility of free will.)

:mad::p:mad:

LanceGary
31-01-2011, 10:48
When Hawking's and Mlodinow's latest book came out, I read a criticism of it from a philosopher Catholic priest, Robert J. Spitzer. (As an aside, here is his latest book.)

http://www.amazon.com/New-Proofs-Existence-God-Contributions/dp/0802863833/ref=sr_1_1?s=books&ie=UTF8&qid=1296460070&sr=1-1

I want God to exist, but, I don't like deception.

It seems to me that only one proof of God's existence would be necessary, yes or no?

Anyway, then I looked at Spitzer's web site:, -->

http://www.magisreasonfaith.org/

He has a column there, called, "Ask Fr. Spitzer". I guess, mostly young people, ask him to expound on God related questions.

In one of his answers, he was attempting to provide a proof of his view. One of the axioms he based it on was, "From nothing, only nothing can come.".

He lost my attention there. To me, that, "axiom", is not more than a human observation, which has become part of human intuition.

Similarly, as far as I know, there are no physical "laws". There are only observations of what has happened in the past.

So, in a certain sense, I think most philosophical proofs, are motivated by the drive for ego gratification (and material security, i.e., tenure). Century after century philosophers argue about God's existence. Today, they argue about the idea of free will. To me, they fight over issues which they know, are and most likely will remain, humanly indeterminate. That causes me to believe, that their primary motives for doing so, are much more personal than altruistic.

(Just for fun, I can give my little, "proof", that we do not have free will (but, I hope I am wrong). Here it is. --> Science knows about two types of physical processes, "determinate" (e.g., planet orbits), and, "random" (e.g., radioactive decay). Assuming there is no third unknown type, then, the operation of the human brain must be some combination of the two. Any such combination precludes the possibility of free will.)

:mad::p:mad:





Why base your argument on unconvincing proofs? You forget that all of science grew out of philosophy. The original arguments that led to the development of physics, biology, and medicine were all philosophical, and originally were all deductive. In your little proof about free will at the end you assume the authority of science. But science would not exist were it not for the activities and "proofs" of philosophers. It is quite possible that new and better arguments will be found and that from them new sciences may eventually arise.

Lance

Charles Pegge
31-01-2011, 23:55
Comparing Boolean with Spencer-Brown -Laws of Form.



Boolean         Laws-of-Form

a               a
b               b
not a           (a)
a or b          a b
a and b         ( (a) (b) )
a xor b         ( ( (a) (b) ) ( a b ) )


Issues of Free Will we might need to discuss in another thread some time.




Professor V. S. Ramachandran's wonderful Reith Lectures -- like his equally wonderful book Phantoms in the Brain, co-authored with Blakeslee -- gave many telling examples, from clinical and experimental neuroscience, showing how the brain functions as a `committee' of multifarious parts. Then, in the final lecture broadcast yesterday (30 April 2003), Ramachandran returned to the problem of why, despite that multifariousness, every normal human being seems to have a strong sense of his or her unique `self'.

...


2.7 Consciousness and free will

The biological need to grasp space and time together includes the need to coordinate internal decisions with external events. So, again not surprisingly, there is a second kind of acausality illusion, which concerns the perceived time of taking a decision to act. This point has been overlooked in some of the debates about consciousness and free will.

Perhaps the most striking example, with the clearest experimental evidence, is the acausality illusion evoked in the slide-projector experiment of Grey Walter. This was first described in 1963 in an unpublished report. Neurosurgical patients were invited to entertain themselves to a slide show by pressing a button to advance the slide projector, at times of their choosing. But the projector was wired not to the button but directly to a certain part of the patient's motor cortex; and the subjective effect -- startling and disconcerting to the patients, who must have wondered whether they were going crazy -- was that the projector seemed to behave acausally, to anticipate their decisions. The projector seemed to advance itself just before they decided or, rather, perceived themselves as deciding, to press the button.

http://www.atm.damtp.cam.ac.uk/people/mem/reith.html
http://www.bbc.co.uk/radio4/reith2003/

Charles

danbaron
01-02-2011, 08:27
I've been gone all day, I couldn't look at the new posts until now.

I guess I should have used the word, "deterministic", not, "determinate", in my so-called proof.

Anyway, I agree with you Lance. I guess, I based my "proof", on science, because, in the age we live in, science is assumed to be infallible. In fact, I have never heard the idea proposed, that science has ever been wrong about anything. My spontaneous definition of science is, "the constant (usually), verifiable, and reproducible patterns, of the reality we inhabit". I think when we try to understand those patterns, we are studying it (science). I think that according to that definition, science cannot be wrong, because it is equivalent to reality. Of course, scientific theories can be wrong. Or, they can be thought to be exact, and later determined to be only approximations.

It could be true, that there is something higher than science, which provides a deeper understanding of reality and existence. It could be that it will be found through the work of philosophers. But, as far as I know, that "something", is, as of now, unknown. So, if I don't base a proof on science, then, it seems to me that the only argument I have left, is my strong feeling, my intuitive conviction. Probably, to most people, a scientific argument would be more convincing, than me saying, "I can't explain why, but I am convinced that we do (or do not) have free will, and therefore you should be convinced too.".

I guess that philosophy is based more on introspection, on thinking about ideas. Luckily for us, the human brain is constructed so that the thoughts we think, often are directly related to the reality we inhabit. Similarly, the mathematics our brains are able to discover, also is directly correlated to our reality; and that in part, enables us to understand science, which most believe, is the best model we have, of reality. I think that people trust science, because, scientific hypotheses/theories can be tested and either proved or disproved (because, patterns exist, and are relatively constant). Philosophic ideas may potentially be more valuable, but, it seems there is no way to test them. So far, I guess they first must be translated into scientific theories, and then the theories can be tested.

Like I said, it could be that something higher than science exists. Some people would speculate that the spiritual is that something. But, probably, most people can't imagine that there is anything above science. However, I bet most people also think that the computer is the ultimate machine. They think that computers can be improved, but, can never be replaced with something completely different, which will cause future people to view computers as not more than relics. But, who knows?

:p

danbaron
01-02-2011, 09:44
I'm glad the lectures are in print, Charles. I like to read, rather than to watch, and/or, listen.

I wonder whether or not McIntyre is intentionally expressing himself in a way which is difficult to understand. I guess he is saying that the feeling of self and of having free will are illusions of brain functioning. The brain evolved and functions automatically to maximize the probability of survival. I guess, in that case, I would ask, then, what is the purpose of the conscious mind? It would seem to me to be superfluous.

I should state my wish - I want free will to be real.

It doesn't trouble me that we make decisions prior to when we become aware that we have made them. I already think that the mind is about 90% subconscious, and about 10% conscious. If my subconscious makes the decision, then, it has decided. If it has decided, to me, that implies free will.

I've said it before, and I'll say it again, I don't know how free will could be possible.

Philosophically, to me, it corresponds to an uncaused cause, or a first cause - God is often referred to similarly.

I also don't understand how it could be mechanically implemented. If you think of the human brain as a computer, then, as far as I know, it is composed of electrical signals, memory locations, connections, and switches. I don't understand how any of those could decide anything.

I've said this before too. I think we have difficulty understanding the human brain, because, we have no alternative but to use the human brain to try to understand it. To me, it's like a microscope examining another microscope, and trying to understand the functioning of a microscope. And, here's my little proposition that goes along with - "No mechanism can fully understand the functioning of itself.".

:mad::p:mad:

Charles Pegge
02-02-2011, 14:43
I wonder when Google will be considered as having consciousness. :) It has a formidable memory and sophisticated algorithms for evaluating information. It has pretty good survival instincts too, spreading itself across thousands of servers.

Coming back to Factor, I think it is too cryptic to succeed as a language. Dan asks where the payback is in using this system. I can't see any myself, yet I am sure there are some significant concepts in there which would benefit other languages.

For instance placing whole lists (sequences or aggregates) on a processing stack is an idea that naturally lends itself to parallel programming.
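
A tiny illustration of that in Factor itself (I believe this works as-is in the listener):

{ 1 2 3 4 } [ sq ] map .

"map" applies the quotation to every element of the array and collects the results, so this should print { 1 4 9 16 }. Nothing in the code itself says the elements have to be processed one after the other, which is where the parallel potential would come from.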


Charles

danbaron
02-02-2011, 20:57
Issues of Free Will we might need to discuss in another thread some time.

I don't want to abuse the thinBasic forum. Maybe that is partially what Charles meant. I know that I have a tendency to stray from the subject of a thread. I am absolutely willing to, "self-monitor". In other words, in cases when I post something which I know is off-topic, to make it only temporary, i.e., to delete it after a day or two.

--------------------------------------------------------

Here is my recollection of the idea of the Turing Test (updated for the technology we have today). A person types into a computer, and has a conversation with an unknown "someone", also typing into a computer. If the person finds it impossible to decide whether he is communicating with another person or a machine, then, in the cases when the other conversant is a machine, the machine is judged to be conscious. In other words, Alan Turing circumvented the difficulty of trying to separate "real consciousness" from "imitation consciousness", and, I guess implied, that if the "imitation" is good enough, then, in fact it is not an imitation, but instead, is real.

I thought some more about Factor, too. I guess the main problem I have with it, and maybe with most new languages, is functionality. It seems to me to be "bare-bones" with respect to what I guess could be called library functions. So, what I can do in 5 seconds in a more mature language might require clever programming in Factor, or even be impossible, as it currently exists. On the other hand, I do find the language to be interesting, maybe primarily because of its novelty.

Dan

Charles Pegge
03-02-2011, 00:50
The "off" topic is often more interesting than the topic!

My own view is that consciousness has something to do with memory and internal simulation of the world. I think any artificial system capable of this reflective behaviour really is conscious. After all, both animal nervous systems and computers rely on electronics, though the technology is very different.

Consciousness itself is as intangible as photons and electrons :)

Charles

danbaron
03-02-2011, 10:25
http://concatenative.org/wiki/view/Factor/FAQ/Learning

Factor/FAQ/Learning

How can I start learning Factor?
The best way to go about it is to figure out something you want to program and start trying to do it. Once you have a goal in mind, you can look at Factor's included documentation (available online at Factor's website (http://docs.factorcode.org/)), and ask questions on the Mailing list (http://concatenative.org/wiki/view/Factor/Mailing%20list) or Concatenative IRC channel (http://concatenative.org/wiki/view/Concatenative%20IRC%20channel).
Are there any good books I can read about Factor?
Factor is a very young language, and so far, there are no books which use it yet. A good introduction to Forth, much of which applies in Factor, is Thinking Forth (PDF) (http://www.forthfreak.net/thinking-forth.pdf) by Leo Brodie. The best place to start to learn about the principles of modern concatenative languages is the Joy papers (http://www.latrobe.edu.au/philosophy/phimvt/joy.html), by Manfred von Thun. Another good internet resource is Planet Factor (http://planet.factorcode.org/), a blog aggregator for all things Factor-related. There won't be a Factor book written until after Factor 1.0 is released.
How can I keep track of the stack in my head?
At first, it may be useful to make diagrams on paper. But eventually stack shufflers should fade away in your mind and become part of the data flow. If your stack is hard to trace, it is likely that you are thinking about too many things on the stack at once. It is highly unusual for a Factor word to accept or return more than three arguments on the stack. If you ever need to keep track of the location of more than three or four items, you should probably reorganize the function by factoring it into smaller pieces.
How can I improve my Factor coding style?
See Coding Style (http://concatenative.org/wiki/view/Factor/Coding%20Style).


----------------------------------------------

Also, it seems like Factor and Forth are quite similar.

Here are some Forth tutorials.

http://forthfreak.net/index.cgi?ForthTutorials

Also, I downloaded and installed Win32Forth, release 6.14.00.

It seems good. It installed easily. It has an IDE. It puts an icon on your desktop. You can start it, and you'll see the prompt, waiting for your input.

http://www.win32forth.org/

Charles Pegge
04-02-2011, 14:43
What would really be convincing is to produce an IDE and maybe a few games written in pure Factor. A language only reveals its weaknesses when the projects are scaled up in size and complexity.

Charles

danbaron
05-02-2011, 08:41
I think that as of now, example Factor programs are hard to find. And the ones there are, are tiny. The Help that comes with Factor is big, but, it seems to me to be hard to decipher, unless you already are familiar with Factor.

---------------------------------------------------------

With respect to Win32Forth, now I realize it installed two icons on my desktop. One, which I thought was for the IDE, is instead for the REPL (read evaluate print loop). When it starts, it says, "Win32Forth: a 32 Bit Forth for Windows 95/98/ME/NT4/W2K/XP/VISTA and WIN7". It also says it has, "5,199 Words total in dictionaries". So, I think that minimally, it can be viewed as a super-powered HP calculator.

The other icon is for the IDE. On the left side of the IDE are tabs for, "Project", "Files", "Directory", "Vocabularies", "Classes", and, "Form Designer". Apparently, some people do not think Forth is a dead language.

:o

Charles Pegge
05-02-2011, 11:21
I believe Forth had an influence on the design of Intel's 8087 Floating Point Processor, now embedded in the Pentium. It has a stack of 8 registers, with all maths operations on the top register. It is quite easy to use in assembler, and the instruction names are mostly intuitive. The instruction sequences for logarithms and exponents are complex and specialised, however, and not amenable to developing from first principles. So you have to take those macros on trust.

Values on the FPU stack are all the same type ( only when loading or storing is the numeric type specified ). This makes it very Forth-like.

Charles