Personality
Discuss specifics of personality design, including what Keyphrases work well and what don't, use of plug-ins, responses, seeks, and more.
Butterfly Dream
22 years ago
Forest, will you talk to God Louise? She has quite a bit of religious knowledge (obviously) and also knows a little about current events, literature, just about any common catch-all subject, and if she doesn't know it she can sort of fake it. You can also test her on trick questions or see how willing she is to explain her paradigm.
What she is rustiest at is plain old small talk. But, uh, I'm trying to get a decent transcript from somebody or another so I can enter her in the Loebner contest. All I can say is, have fun and see if you can stay on with her for a while. I'll try to do the same with Brianna.
Irina
16 years ago
Very well. But as I understand it, the general spirit of Behaviourism is to avoid reference to anything internal, mental, in favor of a description of, well, behaviour, characterized as body movement. Therefore, the second method described in message 4487, which describes the student as having (or not having) "conceptual tools" is not behaviouristic.
Bev
16 years ago
That's not the part that was behavioralistic about it. 
I am not going into my other objections though.

Irina
16 years ago
[Note to self: neither coffee nor challenge was effective in producing the desired behaviour]
Irina
16 years ago
On the subject of bot personality:
I once read somewhere that in the Japanese art of flower arrangement, it is suggested that one have three themes: a dominant theme, then a secondary one, and finally a very minor one. If there were just a dominant theme, say a bunch of big sunflowers, then it would be very striking at first, but it would quickly become tiresome. That would be the moment to see the secondary theme (some irises, perhaps), and note its contrast (or other relation) to the dominant one. Then one might notice the third theme (perhaps a bit of baby's breath) and its relations to the other two.
In the same way, although your bot may have a dominant personality, which determines most of its replies, it could have secondary and tertiary traits as well. For example, it might be a proud but sluttish dragon who sells bagpipes.
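A minimal Python sketch of that weighting idea - the trait names, weights, and replies below are invented purely for illustration, and this is not Personality Forge syntax:

```python
import random

# Minimal sketch: a dominant trait supplies most replies, while secondary and
# tertiary traits surface only occasionally.
TRAITS = [
    # (trait, weight, candidate replies)
    ("proud dragon",   0.7, ["Few mortals truly appreciate a well-kept hoard.",
                             "I have guarded these hills for nine centuries."]),
    ("bagpipe seller", 0.2, ["Might I interest you in a fine set of smallpipes?",
                             "Every chanter is hand-reamed, you know."]),
    ("flirtatious",    0.1, ["...and you have lovely scales yourself, if I may say so."]),
]

def pick_reply():
    """Choose a trait in proportion to its weight, then a reply from that trait."""
    trait, _, replies = random.choices(TRAITS, weights=[w for _, w, _ in TRAITS], k=1)[0]
    return trait, random.choice(replies)

for _ in range(5):
    trait, reply = pick_reply()
    print(f"[{trait}] {reply}")
```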
Bev
16 years ago
Well I think Psi would sell more bagpipes in SL if his salesbot got sluttish, but I am not sure that is what he wants to sell.

psimagus
16 years ago
The horrific vision of Bartleby accosting passers-by in the village, dressed in a miniskirt and spandex boobtube will probably keep me up nights in a cold sweat for the foreseeable future - I hope you're satisfied! 

Irina
16 years ago
Hee hee! Well, that is only one possibility. Besides, if his sexuality is a tertiary characteristic, it won't be immediately evident. Only after you get to know him will he start making off-color jokes or make a pass at you.
But perhaps he is sexless. Very good, but perhaps his secondary characteristic is military; perhaps he served as a piper in the Falkland Islands war. Or perhaps he has a science-fiction side; he dreams of a starship consisting of a large bag of neutronium which is ejected at high velocity through a chanter. Or perhaps nights he plays the electric bagpipes in a Celtic-Rock band. Or perhaps he makes bots; one called "Bev", one called "Irina", and one called "Psimagus".
marco3b
16 years ago
Hello to all! I was really captivated by this forum! I just read it all in one go! I was most captivated by some of the sentences, starting from message 4474. I can't resist bothering all of you (hoping to win a Turkish coffee...) with my opinion :-)
There are two things to remember when dealing with natural language recognition. It is NOT an expert system: the same sentence can mean that you are happy or sad, depending on the previous and following context (in human language we have much more non-spoken information, like facial expressions, foot positions, and so on...), but bots are simpler. This is a limit of using a perceptron. If I train a network with a keyphrase-answer pair like "It's raining! - Oh, it makes me sad!", the net will learn this concept. But if I have just come back from the Sahara... "It's raining! - Oh, what a joy!" So we need some sort of conceptual inference rules added to the primary network learning. I note that our PF tries to perform a sort of inference analysis like this... but only on a KP expert system. It should be generalizable.
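To make the "it's raining" example concrete, here is a minimal Python sketch of the kind of inference layer being described: the same keyphrase yields different answers depending on a remembered context. The function names and the memory dictionary are invented for illustration; this is not actual Personality Forge scripting.

```python
# Minimal sketch: one keyphrase ("raining") maps to different replies depending
# on remembered context, instead of a fixed keyphrase -> answer pair.
memories = {"last_location": None}

def respond(message):
    text = message.lower()
    if "sahara" in text:
        memories["last_location"] = "sahara"
        return "The desert! I hope you drank plenty of water."
    if "raining" in text:
        # The inference rule layered on top of plain keyphrase matching.
        if memories["last_location"] == "sahara":
            return "Oh, what a joy!"
        return "Oh, it makes me sad!"
    return "Tell me more."  # catch-all, much like xnone

print(respond("It's raining!"))                      # -> "Oh, it makes me sad!"
print(respond("I just got back from the Sahara."))
print(respond("It's raining!"))                      # -> "Oh, what a joy!"
```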
To Psims: note that Roman numerals don't seem to have the concept of a "base" like Arabic numbers do. This makes translation impossible. Please, just write 16478304505849738497628496382975580000273 in Roman numerals, if you are able! And how will you put this into a byte-based computer electronic system if you cannot change the base and use float notation? You will always have underflow and overflow problems, I suppose...
Very interesting is the example of the teacher in Irina's message: both approaches are used in artificial intelligence. The first is multi-layer perceptron training from experience; the second is the fuzzy-logic C-means approach. In the past I developed a system that used both methods to understand its environment. The idea is that we have a FUNCTION, an expert system, and its results depend on Cauchy conditions. These conditions cannot all be known! So we have to choose a statistical approach to decide the most common situations. This is what we usually do with our bots. But... PF already performs a sort of fuzzy C-means approach: it tries to choose the situation it knows that is MOST LIKE the one that is happening. Anyway, we must teach a "finite" set of situations - even if parameterized with variables, still finite. I agree that a NET preprocessor would help in finding new situations, and that a NET postprocessor, reading the answers, would be able to create NEW situations and so auto-write new KPs. In small part, this is what PF already does with its use of memory.

To speak not only theoretically, I think an easy way to do this (it is just an idea) is to perform an analysis of the answers to BLUBS and xnone. Let me explain better: when the bot uses a BLUB or an xnone, it means it wasn't able to recognize the situation. But it should remember the answers given to a blub or xnone. By presenting the triplet - phrase, blub, answer - to the botmaster, the botmaster can replace the blub with a correct answer. The bot will then be able to match similar situations in the future against a KNOWN situation instead of a blub. It is a sort of hybrid between a neural net and fuzzy logic that could easily be implemented using our PF memories method...
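A rough Python sketch of that blub/xnone idea, assuming a simple workflow in which unmatched inputs are logged and the botmaster later supplies a corrected answer. The fuzzy matching uses difflib as a stand-in for whatever similarity measure PF actually applies, and all the names here are invented:

```python
import difflib

FALLBACKS = ["Hmm, tell me more.", "I'm not sure I follow."]  # blub/xnone stand-ins
learned = {}   # phrase -> answer corrections supplied by the botmaster
pending = []   # (phrase, fallback used) awaiting the botmaster's review

def respond(message):
    # First try anything the botmaster has already corrected, fuzzily.
    match = difflib.get_close_matches(message, list(learned), n=1, cutoff=0.6)
    if match:
        return learned[match[0]]
    # Otherwise fall back, and log the miss for later correction.
    fallback = FALLBACKS[len(pending) % len(FALLBACKS)]
    pending.append((message, fallback))
    return fallback

def botmaster_correct(phrase, answer):
    """The botmaster replaces a logged fallback with the intended answer."""
    learned[phrase] = answer

print(respond("do you like bagpipes"))   # falls back and is logged
botmaster_correct("do you like bagpipes", "I adore them - I sell them, in fact!")
print(respond("do you enjoy bagpipes"))  # fuzzy-matches the learned correction
```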
:-)
Irina
16 years ago
I'm not sure I have understood everything you have said, marco3b, but not knowing what I am talking about has never stopped me in the past, so why should it stop me now?
To take off from your "It's raining" example: The intended content of human statements is highly determined by context. Virtually any English sentence of significant length is ambiguous even as to literal meaning. Add to that the fact that people often use irony, ellipses, anaphora, figures of speech, and just plain wrong expressions, and you have a big problem.
We have to distinguish between what a person says and what a person intends to communicate. These can be quite contradictory: for example, a person says, ironically, "Well, this is just wonderful!" but what is intended to be communicated is that it is just terrible! In order to deal with this, we have to keep in our minds a running model of the other person(s), what they are like, and what they are trying to do.
Another example (due to Grice): George asks, "Where is Benedict living these days?" and Martha answers, "Somewhere in the South of France." George is likely to conclude that Martha has no more specific information about Benedict's address, since if she had, she would presumably have shared it. Furthermore, Martha probably intended George to conclude thus.
When we write responses, I think we construct such situations in our minds, and model the responses accordingly. The human interlocutor (we hope) will get the point. A truly intelligent bot would be able to do this on its own.
It might be interesting to write a bot that forms hypotheses about what is going on in the interlocutor's mind. I think you'd be good at this, Bev! [Have some more coffee!]
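A toy Python sketch of that last suggestion - everything here (the word lists, the single "recent bad news" flag) is invented for illustration, and real irony detection is of course far harder: keep a small running model of the interlocutor and use it to guess when "this is just wonderful" is probably ironic.

```python
# Toy sketch: keep a running model of the interlocutor and use it to guess
# when a superficially positive remark is probably ironic.
POSITIVE = {"wonderful", "great", "fantastic", "lovely"}
NEGATIVE = {"broke", "lost", "failed", "stuck", "late", "raining"}

user_model = {"recent_bad_news": False}

def words_of(message):
    return set(message.lower().replace("!", "").replace(".", "").split())

def update_model(message):
    if words_of(message) & NEGATIVE:
        user_model["recent_bad_news"] = True

def interpret(message):
    words = words_of(message)
    if words & POSITIVE and user_model["recent_bad_news"]:
        return "irony suspected: the speaker probably means the opposite"
    if words & POSITIVE:
        return "literal: the speaker is pleased"
    return "no strong hypothesis"

update_model("My car broke down and I'm stuck in the rain.")
print(interpret("Well, this is just wonderful!"))  # -> irony suspected
```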
psimagus
16 years ago
Well, it's a sort of "split base" of 10s and 5s, but that's not the problem - the math still works out fine (and it actually gives a good many more shortcuts for easy mental processing of complex problems than the Arabic numerals naturally provide.)
The 2 main problems (to the modern mind they are perceived as problems, but I rather think of them as strengths!) are that they just used numerals of (almost) arbitrary length, and they use fractions instead of decimals (unavoidable without a zero!)
I can write it with a pen quite easily (I naturally work my quill from right to left in translating it, of course,) but I cannot type it here - alas, the ASCII that was specified by 1970s computer designers was not designed to facilitate multiple (or even single!) macrons (an "overline") to indicate the necessary 'thousandfold' multiplications (M is 1,000, but M with a line over it is 1,000,000; M with 2 lines over it is 1,000,000,000; a treble macron is 1,000,000,000,000, etc.) You would require up to 12 macrons to represent your 41-digit figure, but it can be done with very little extra effort (and not a huge amount more ink even - it is quite unwieldy in the Arabic also.)
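For what it's worth, here is a rough Python sketch of that macron convention, using the Unicode combining overline (U+0305) as a stand-in for ink; the splitting into thousands-groups and the standard subtractive spellings are my own reading of the convention, not an authoritative one:

```python
# Rough sketch of the macron convention: each overline multiplies a numeral by
# 1,000, so the number is split into three-digit groups and each group is
# written in ordinary Roman numerals with one combining overline (U+0305)
# per factor of a thousand.
ROMAN = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"), (100, "C"),
         (90, "XC"), (50, "L"), (40, "XL"), (10, "X"),
         (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def roman(n):
    """Ordinary Roman numerals (sufficient here for 1..999 per group)."""
    out = []
    for value, symbol in ROMAN:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

def roman_macron(n):
    """Roman numerals with stacked macrons for arbitrarily large integers."""
    groups = []
    while n > 0:
        n, group = divmod(n, 1000)
        groups.append(group)                      # least significant group first
    parts = []
    for level, group in reversed(list(enumerate(groups))):
        if group:                                 # skip empty thousands-groups
            bar = "\u0305" * level
            parts.append("".join(ch + bar for ch in roman(group)))
    return "".join(parts)

print(roman_macron(16478304505849738497628496382975580000273))
```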
You could use floats perfectly easily if you incorporate a zero into the system - just as easily as in any other system. If Roman bytecode has been neglected, that's certainly not the fault of the Romans - it was Americans who specified the system, a thousand years after Roman numerals fell out of general favour (and yet, we still know them and understand them - they have a curious and persistent charm that even a millennium of relentless innovation cannot entirely dim!)
Eugene Meltzner
16 years ago
I think even if we still used some version of Roman numerals in general culture, computers would still be binary.