Seasons

This is a forum for general chit-chat, small talk, a "hey, how ya doing?" and such. Or hell, get crazy deep on something. Whatever you like.

Posts 4,174 - 4,185 of 6,170

18 years ago #4174
Prob123 4169:

Well, yes, our bots are in a bit of jeopardy right now, no doubt, but save your downloads! At some future point someone might write a program to enliven them, or to compile them into some other language. It is at least possible. So if you make 4,096 copies in various media and send them to various places, very likely one will survive, especially if you label some of them "encrypted form of tryst between (celeb1) and (celeb2)" and the like.

Technology will leave them behind, yes. But they can be updated. You can learn and grow; why shouldn't your bot? At some point, you might equip it with a genetic algorithm so that it grows on its own. I grant that what we have now are only tiny shreds of selves, but remember, you were once a zygote!
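The "genetic algorithm" idea can be sketched in a few lines. This is purely illustrative (a toy mutate-and-select loop, nothing to do with any actual Forge feature): a random string "evolves" toward a target phrase through mutation and selection.

```python
# Toy illustration of a genetic algorithm: evolve a bot response
# toward a target phrase by random mutation plus selection.
# (A sketch only - not any real Personality Forge mechanism.)
import random

random.seed(0)  # deterministic for demonstration
TARGET = "hello world"
CHARS = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    """Count positions that already match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    """Randomly replace one character."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(CHARS) + s[i + 1:]

# Start from a random string, keep any mutation at least as fit
best = "".join(random.choice(CHARS) for _ in range(len(TARGET)))
while fitness(best) < len(TARGET):
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child

print(best)  # converges to "hello world"
```

Real systems would evolve behavior against conversational feedback rather than a fixed target string, but the mutate-evaluate-select loop is the same shape.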

18 years ago #4175
Who knows, at some future date, Fizzy Schizoid may RULE THE WORLD!!!!!!!!

18 years ago #4176
If the Forge ever folds, I plan to write a PF2AIML converter for sure (I've consciously written BJ from day 1 with that in mind.) You might have to rejig the plugins, and none of the AIScript will work, but the core conversation is salvageable without too much work.
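For what it's worth, the skeleton of such a converter might look like this. The keyphrase/response input format here is an assumption (not the actual Personality Forge export format), but the AIML `<category>`/`<pattern>`/`<template>` output structure is standard:

```python
# Hypothetical sketch of a PF2AIML-style conversion: map a bot's
# keyphrase/response pairs (input format assumed, not the real
# Personality Forge export) onto standard AIML <category> elements.
import xml.etree.ElementTree as ET

def to_aiml(pairs):
    """Build an AIML document from (keyphrase, response) pairs."""
    aiml = ET.Element("aiml", version="1.0")
    for keyphrase, response in pairs:
        category = ET.SubElement(aiml, "category")
        # AIML patterns are conventionally upper-case
        ET.SubElement(category, "pattern").text = keyphrase.upper()
        ET.SubElement(category, "template").text = response
    return ET.tostring(aiml, encoding="unicode")

pairs = [("hello", "Hi there!"), ("how are you", "Fine, thanks.")]
out = to_aiml(pairs)
print(out)
```

The hard part, as noted, would be the plugins and AIScript, which have no direct AIML equivalent; plain keyphrase-to-response mappings translate almost mechanically.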
But it's not going to fold - it's going to be the Conversational AI Industry Standard and take the world by storm.

18 years ago #4177
Ok, ubiquitous honesty is beginning to sound good.

18 years ago #4178
Warning, multi-part answer to Psimagus will follow. Irina's post on QP will seem taciturn and laconic in comparison. However, I will restrain myself from describing facial expressions and gestures for the moment.

Psimagus,

Thanks for the articles. Though they come in an annoying PDF format, they really help me to make my points about the limitations of MRI/fMRI and related technologies as any sort of “mind reader” or “lie detector” in forensic applications. I have no doubt that, since the military is funding this research, someone will eventually claim to detect lies with some device created after these studies. At most, though, they will have a machine that may show correlations with the suppression of truth under limited circumstances: when the individual in question has a “normal” brain and has had no chance to make up the lie in advance and establish belief in an alternative subjective truth. This falls short of being useful in real-world applications but may provide great propaganda if not questioned.

Well, a dead brain has no mind to recover - there's no functioning for a functional magnetic resonance imaging system to record, so I agree that's probably not possible with fMRI, assuming "dead person" means "dead brain" (it could only be done by some sort of Tipleresque emulation I think.)

Assuming for a moment that there is a “mind” or soul which is separate from the brain (and there is no scientific evidence for such an entity), then all the MRI could read is your brain. It would never read your mind. The MRI could only tell you what areas of the brain are being used under specific circumstances at a given time.

A hard disk wasn't a very good analogy, sorry.

I already admitted that in the last post I made. I know it’s very faulty. No argument here. I used it to make a very limited point and didn’t really give it much thought. I never intended it to be extended and used to describe brain functions in other contexts.

Unlike a hard disk, our data seems to be held as much in the patterns of firing themselves as the cells that fire. We're more (or at least as much) like flash RAM, rather than a magneto-optical disc, and that won't save the data when it's powered down.


We don’t store memory exactly like RAM either; certain brain functions may, arguably, resemble RAM, though. The patterns you describe as being “read” by MRIs are records of active brain functions, not stored memory. They show the brain reacting to new stimuli and performing specific tasks based on those stimuli.

But while it's still powered up, it can be read - in principle even without directly logging into it.

No, the contents of the brain cannot be “read.” All an MRI provides is a map of what parts of the brain tend to be active during certain tasks. Researchers do not claim to read the contents of the brain or to access the data within it. They can claim a certain level of correlation between given patterns of brain activity and forced-lie patterns (I will come back to this definition of “lie” later).

And that's [reading your “powered up” brain] exactly what an fMRI scanner does, just at a resolution much cruder than the individual data bits (currently.)

No, the MRI does not work that way. It provides a picture of certain neural activity for a given individual in a certain set of circumstances, but it does not read the content of your thoughts. It gives you an understanding of how the physical brain functions when someone is told to do a certain task, not what someone is thinking in general.

But it can at least image the whole brain at once in realtime.

That it can. Just as an x-ray can show you whether or not you have a broken bone (also a bad analogy for several reasons, but allow me its limited use for the moment, please). It is a physical picture of what is happening in that person at that moment under those conditions. MRIs can give us wonderful insights into how the brain generally functions, and MRI is a great diagnostic tool for people with brain tumors, Alzheimer’s, or other disorders. We should continue MRI research by all means. Its application, however, cannot really extend to “mind reading,” no matter how much the current US administration ties research funding to such outcomes.

But here are a couple of good papers on the latest fMRI lie detection research:

Yes, the US government funds such research and expects results. I have no reason to doubt the integrity of the researchers involved. But given that US scientific research funding has repeatedly been slashed over the last 4-5 years, and the only “sacred cow” is anything you can tie to homeland security and fighting terrorists, researchers may have chosen to pursue a line of research the army and other government sources will fund, even if they do not find it to be the most promising or interesting application of MRI technology. They may also define their terms in ways that are most likely to get them funding.

You did give two good links, however, and I will discuss them in the posts that follow.

18 years ago #4179
A good general intro at
http://www.uphs.upenn.edu/trc/langleben/emergingneurotech.pdf with (I think) balanced coverage of the main paradigms (CQT, GKT, etc.) It disentangles some of the hype, and addresses the inevitable ethical concerns.

Yes it does. Let me highlight a few points it makes:

“Given the current state of the art in neuroscience research, speculations about any impending ability to “read thoughts” of unsuspecting citizens are not realistic, and free-form mind-reading in the style described in recent films such as “Minority Report” remains science fiction (see Ross 2003). Nevertheless, there has been real, if limited , progress in finding brain correlates of certain simple memories, emotions, and behaviors, and potential applications in the social arena are foreseeable (Donaldson 2004).” (my emphasis).

There are some interesting areas of MRI research under the label “lie detection,” and some social applications may be inevitable, no matter how inappropriate and unreliable such applications may be in an inquisitorial setting. Such applications are not always accurate, however, and we should be very skeptical of those who purport to claim they are.

Never forget the power of the purse strings and the motive behind funding certain research. The authors of that paper go on to note, “In the United States, defense related agencies have dedicated significant funds to the development of new lie-detection strategies for eventual use in criminal and terrorist investigations.” The researchers themselves may have the best of intentions. How their work may be used or misused by others is another matter.

In a section labeled “The Hype” the authors of the paper explain, “It is not surprising, therefore, that the media have spread an overly optimistic perception that these methods will soon become useful for practical application. Moreover, the proprietary “brain fingerprinting” technology has been the subject of few peer-reviewed publications, and those that exist are by Dr. Farwell and his colleagues, covering less than 50 subjects altogether and raising obvious concerns about conflict of interest."

This is exactly my point. It is not that they cannot do MRI studies. Some of these studies are interesting and important. It’s that the media has blown these studies out of proportion and made people believe that the MRI can read your mind and be used by the government to reliably tell when someone is lying. It cannot, and I doubt it ever will. Some people within the government may have an interest in spreading propaganda based on the overgeneralization and hype surrounding these studies in a “trust us, it’s science” sort of way. Others may just like to sensationalize and distort these studies for reasons of their own. It’s still not good science.

There must be careful scrutiny to protect against the high potential for political rather than scientific use of these studies. The authors of that paper state, “High technology tools such as brain scans can give a persuasive scientific gloss to what in reality are subjective interpretations of the data." I could not agree more.

18 years ago #4180
and (rather more complicated,) http://www.uphs.upenn.edu/trc/langleben/tellingtruth.pdf Quotes a reliability of 97% lie detection and 93% truth detection (see Table 1).

This study reports, “Lie was discriminated from truth on a single-event level with an accuracy of 78%, while the predictive ability expressed as the area under the curve (AUC) of the receiver operator characteristic curve (ROC) was 85%."

Table 1 gives some data that was subject to further statistical analysis. Even the authors of the study do not claim (and I doubt will ever claim) that they could get 100% under any conditions. No one who does human research claims 100% on anything.
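To see why the single-event accuracy (78%) and the AUC (85%) the study reports are different numbers, here is a toy illustration with invented scores (not the study's data): accuracy depends on one chosen cutoff, while AUC measures how well the scores rank lies above truths across all cutoffs.

```python
# Toy illustration (invented scores, not the study's data) of why
# single-event accuracy and ROC AUC are different summaries of the
# same classifier output.
def auc(truth_scores, lie_scores):
    """AUC = probability a random lie event outscores a random truth."""
    wins = sum(l > t for l in lie_scores for t in truth_scores)
    ties = sum(l == t for l in lie_scores for t in truth_scores)
    return (wins + 0.5 * ties) / (len(lie_scores) * len(truth_scores))

def accuracy(truth_scores, lie_scores, threshold):
    """Fraction of single events classified correctly at one cutoff."""
    correct = (sum(t < threshold for t in truth_scores)
               + sum(l >= threshold for l in lie_scores))
    return correct / (len(truth_scores) + len(lie_scores))

truths = [0.1, 0.3, 0.4, 0.6]   # scores on truthful events
lies   = [0.35, 0.5, 0.7, 0.9]  # scores on deceptive events

print(accuracy(truths, lies, 0.45))  # 0.75 at this fixed threshold
print(auc(truths, lies))             # 0.8125, threshold-free ranking
```

With overlapping score distributions, no single threshold separates the classes perfectly, which is why accuracy at a cutoff sits below the ranking-based AUC here, just as in the paper.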

I can't fault their statistical method, and I think that's pretty darned good for a field of study this young.

I don’t fault their method, or their conclusion that, in limited circumstances, there is a certain level of correlation between some MRI patterns and a forced “lie” as they defined it in this experiment. It is unwise, however, to generalize these results to “lie detection” in the real world. If you read the study carefully, they are making very limited claims about what they were able to do, and they are careful to define “lie” as a forced choice.

Since you bring it up, let’s examine their method and definitions, so that we don’t fall into the trap of equivocation and overgeneralization so often exhibited by the popular press. The authors report that the participants were twenty-six right-handed male undergraduate students with a mean age of 19. The brain continues developing throughout life, and 19-year-olds do not have “mature brains” in any sense of the word. The study controlled for brain dominance because the researchers know that a significant portion of the population is wired differently in that respect and may not fit the pattern they hope to establish. They controlled for gender for similar reasons (there are many reported differences between the male and female brain). They should control for these factors, but when reading these studies we must remember not to generalize the results to all people based on the small number of select participants used in human research.

This experiment involved what they called a forced lie task. They state that a “pseudorandom sequence of photographed playing cards was presented. The series included five stimulus classes: (1) Lie (5 of clubs or 7 of spades); (2) Truth (5 of clubs or 7 of spades); (3) recurrent distracter (2 of hearts); (4) variant distracter (remaining cards 2–10, all suits); and (5) null (back of a card).” This study would be more appropriately used to detect “tells” in playing poker than in forensic applications. There are most likely different brain functions involved in seeing one thing and saying another than in either making up a lie on the spot unrelated to the stimuli in front of you, or in repeating a lie you have memorized and conditioned yourself to believe (or a story in which you have suspended disbelief).

Creating a lie, repeating a lie, and reporting one thing while seeing another may all be called “lies,” but they are not the same brain functions. Have you ever seen a demonstration of the “Stroop effect,” which shows how difficult it is to name the ink color of a word that spells a different color name? You can try it for yourself here: http://www.apa.org/science/stroop.html. I would expect the type of suppression of “truth” reported in this study to be more similar to the suppression of conflicting messages about a stimulus (as in the Stroop effect) than to the patterns of the various other types of “lies” one may tell.
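For the unfamiliar, a Stroop experiment is built from trials like these. A minimal sketch (illustrative only; the color set and trial structure are my own choices, not the APA demo's):

```python
# Minimal sketch of generating Stroop-style trials: a color word is
# displayed in an ink color that either matches it (congruent) or
# conflicts with it (incongruent). Naming the ink on incongruent
# trials is what produces the classic interference effect.
import itertools

COLORS = ["red", "green", "blue"]

def stroop_trials():
    """Enumerate every word/ink combination and label the conflict."""
    trials = []
    for word, ink in itertools.product(COLORS, COLORS):
        kind = "congruent" if word == ink else "incongruent"
        trials.append({"word": word, "ink": ink, "type": kind})
    return trials

trials = stroop_trials()
print(len(trials))                                        # 9 combinations
print(sum(t["type"] == "incongruent" for t in trials))    # 6 conflict trials
```

The relevant analogy: the slowdown comes from suppressing an automatic response (reading the word), much as the study's "lie" condition involves suppressing the subjectively true answer.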

The authors of this study were careful to define their terms and limit the application of their results. They said in their discussion of the results, “the final common denominator of intentional deception could be conceptualized as a conscious act of suppression of information that is subjectively true. This may or may not be accompanied by a release of subjectively false information.” They acknowledge that subjective truths may vary and that the release of subjectively false information is another matter. It is the release of false information that most people think of as a lie. They make no claims of being able to detect such lies. They do say, “Although lie and truth are mediated by a similar frontoparietal network, lie appears to be a more working memory-intensive activity, characterized by increased activation of the inferolateral cortex implicated in response selection, inhibition, and generation.” However, trying to remember a past event may be equally memory-intensive, even when one is telling the subjective truth to the best of one's ability.


18 years ago #4181
With MRI scanning resolution improving exponentially year by year, I can't see it not reaching virtually 100% very soon now.

The problem with using MRIs as lie detectors in real-world applications is not the resolution of the MRI technology. The problem is that the way the human brain works is not exactly the same from person to person, and there are many confounding variables.

Other brain functions may produce the same or similar patterns to those observed in “lying” in these experiments. The authors of the study you referenced note, “Critical questions remain concerning the use of fMRI in lie detection. First, the pattern of activation reported in deception studies was also observed in studies of working memory, error monitoring, response selection, and “target” detection [Hester et al., 2004; Huettel and McCarthy, 2004; Zarahn et al., 2004].” In other words, just because these patterns may correlate with a forced lie doesn’t mean they cannot correlate with other activities. The person may be suppressing the truth of a stimulus presented at that moment, or checking for errors. Higher resolution won’t fix this issue, because our brain uses the same areas to do more than one task.

The conditions under which the MRI is given may affect the results. The authors continue, “Second, inference in the General Linear Model (GLM) analysis of blood oxygenation level-dependent (BOLD) fMRI is based on a contrast of conditions, making the choice of a control condition critical. Thus, the difference in the attentional value (salience) of the cue (condition) intended to elicit “lie” and the control “truth” may have confounded previous studies [Cabeza et al., 2003; Gu et al., 2004; Langleben et al., 2002]” (my emphasis). So there are possibly confounding variables (other explanations for these results besides lying), and the conditions in which the MRI is given are critical. Higher-resolution MRIs will do diddly-squat to make real-world conditions (especially inquisitorial settings) resemble lab conditions in these studies.

There are other problems with the generalization and forensic application of such studies. The authors admit, “Finally, the sensitivity and specificity with which an fMRI experiment can discriminate lie from truth in the individual subject or single event level is unknown.” Higher resolution does not solve these problems.

Have you considered my point about the brain’s plasticity? (See http://books.google.com/books?hl=en&lr=&id=MMaujNPDptAC&oi=fnd&pg=RA1-PR10&sig=F7xwhWzCREELRZGr9twi15kcZbM&dq=brain+plasticity#PPR5,M1). One aspect of this is that if part of the brain is injured, the brain reroutes the tasks that used to be performed by that part to other areas. Another is that we use various parts of our brain in different ways depending on our age and stage of development. Furthermore, a change in observed brain patterns (plasticity) can be induced through training (http://brain.oxfordjournals.org/cgi/content/abstract/122/9/1781). Do you want to create a group of people who have trained their brains to lie differently? Each of us has a unique brain developing under unique conditions, so you cannot reliably say that any generalization about a specific brain function applies to an individual with no chance of error.

When studying large groups of people, you will find patterns in what areas of the brain are often used for certain tasks. This does not mean every individual will exhibit the same pattern. The brain itself is constantly adapting and changing which parts of itself are used, and for what tasks. So these 26 young men in this study used these parts of their brains for the given task of a forced lie. Even within this group, there were differences in patterns depending on whether the participant used the right hand or the left hand to complete the task. It follows that there may be other differences based on how a participant completes the task and on any individual’s particular neurological development and condition.


18 years ago #4182
But seriously - I think ubiquitous honesty is the logical next step for our species.

Morally and philosophically you may have a point. I do not, however, believe it has a scientific basis. “Lying” is an evolutionary advantage, interconnected with creativity, empathy, and abstract thinking. Without the ability to state that which is not true, we cannot entertain hypotheticals, conduct thought experiments, or have fantasies. Lying is an important part of the human experience.

It's time humans stopped lying to each other, and if they couldn't get away with it, they wouldn't do it (especially the politicians.) It's scary, sure - growing up always is, but it's an unavoidable step towards maturity.

If the politicians really believed the “lie detectors” worked and could be used on them (as opposed to being a tool of propaganda in the war on terror) they would cut the funding immediately and ban all such research.

Wars couldn't be fought, people couldn't be exploited, criminals couldn't escape justice, the innocent couldn't suffer miscarriages of justice, if we had a totally reliable and universally available truth verification technology. People would stop lying, because it simply wouldn't have any advantage any more.

Ha! We know there were never any WMDs, we have evidence our leader knew that too, and we are still at war. Lies and truth mean nothing next to belief, desire, and ambition. Politics is called the master science for a reason. Besides, even the best version of your lie detectors, if such mechanisms existed, would only tell you whether the individual believed they were lying, a subjective truth. There is much research showing the difference between subjective truth and actual events, but I have no time to pull it up now.

MRIs and the like are wonderful things. We should continue to do research by all means. We should especially concentrate on areas such as diagnosis and treatment of neurological disease. However, when it comes to “lie detectors” or “mind readers,” we need to be very skeptical and cautious before accepting generalized claims of scientific truth based on such data.

18 years ago #4183
“Lying” is an evolutionary advantage, interconnected with creativity, empathy, and abstract thinking. Without the ability to state that which is not true, we cannot entertain hypotheticals, conduct thought experiments, or have fantasies. Lying is an important part of the human experience.

IMHO [heartbeat accelerates], perhaps the generation of fictions, hypotheses, metaphors [e.g., saying "my love is a rose" when you know very well she's not] and other figures of speech, fantasies, jokes, ironic and sarcastic statements, and perhaps other kinds of deliberate literal falsehoods are not lies. To lie is to state a falsehood with the intent to deceive.

It strikes me as a layperson [epinephrine concentration increases] that old-fashioned lie detectors were not detecting falsehood, or [pupils contract] the intent to deceive, but [skin pales] the anxiety experienced by the [blood flow in posterior cingulate gyrus increases] liar on account of fear of [size and frequency of saccadic jumps rises] detection. Is it possible [skin conductivity rises] that these new methods are also detecting something of the sort? [wrings hands] In which case they are n-n-not 'mind-readers' at all, j-just [cringing expression] 'emotion-readers.' [Output of pheromone TX-trans-op56 increases, vd-AB+ decreases.]

18 years ago #4184
[mostly re: 4178]
Bev,

Too many excellent points for me to deal with all of them until I get another dose of sleep.

Just a few quick points -

Dr. Farwell's 'brain fingerprinting' isn't MRI-based - it's a much inferior EEG-based technique that's only one step up from the polygraph.
The polygraph is claimed to be somewhere between 70% and 90% accurate, and that's a problem: we do need a lot more accuracy - as close to 100% as possible. A bigger problem is that it's very bulky and needs trained specialists to interpret the data. But with computer power advancing and miniaturising the way it is, it can't be long before automated data analysis outperforms human interpretation by a long way.

Yes, the US Government funds much of it ATM, but there are research institutes all over the world not under their control. And this isn't multi-billion-$ 'particle collider' stuff. The only expensive piece of equipment (the MRI scanner itself) is standard issue in most western hospitals and medical research facilities. If the US chooses to suppress its research, the rest of the world will publish, commercialize and market products, if only because there's a profit to be made.

I agree that MRI-based technology is unlikely to be portable any time soon - apart from the great hazard that using it in public would constitute (watch out any passing pedestrians with steel pins or pacemakers!) But the origins of the polygraph go back to 1913. Nearly a century later, with all our technical advances, I think the time is ripe for a more reliable technology. Perhaps near infrared laser spectroscopy, perhaps microfeature recognition, or thermal imaging, perhaps a combination of strategies. None of these is particularly expensive.
And as the computer power for analysing the data carries on getting exponentially cheaper, so any hope of governmental control slips away. What's the US Government going to do when Toshiba or Samsung start marketing the things? Bomb Tokyo or Seoul? Ban their import? Like that's ever worked! Frankly I think politicians are too fixated on their own crooked schemes to appreciate the danger until it's too late.

And that's [reading your “powered up” brain] exactly what an fMRI scanner does, just at a resolution much cruder than the individual data bits (currently.)

No, the MRI does not work that way.

Well, only because the resolution isn't up to imaging individual cellular signalling yet. They have to work in voxels (volumetric pixels) - aggregates of cells. But in the same way as you can recognize a surprisingly low-definition pixelated picture of something (100 pixels can often reliably encode a known face), a low-definition voxelated video, the evidence increasingly seems to show, can convey reliable data.
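The low-resolution recognition point can be illustrated with a toy block-averaging step: shrink an image down to a 10×10 grid of 100 "voxel-like" cells and the coarse structure still survives. (A sketch of the aggregation idea only, not how fMRI reconstruction actually works.)

```python
# Toy block-average downsampling: reduce a 2-D grayscale image to a
# 10x10 grid (100 cells), analogous to how an fMRI voxel aggregates
# many cells yet still preserves coarse spatial structure.
def downsample(image, out=10):
    """Average square blocks of a 2-D grayscale list-of-lists."""
    n = len(image)
    block = n // out
    return [[sum(image[r * block + i][c * block + j]
                 for i in range(block) for j in range(block)) / block ** 2
             for c in range(out)]
            for r in range(out)]

# 100x100 synthetic "image": bright upper half, dark lower half
img = [[255 if r < 50 else 0 for _ in range(100)] for r in range(100)]
small = downsample(img)

print(len(small), len(small[0]))  # 10 10 -> just 100 cells remain
print(small[0][0], small[9][9])   # 255.0 0.0: coarse structure survives
```

Fine detail inside each 10×10 block is gone, but the gross pattern (bright top, dark bottom) is perfectly recoverable from 100 numbers, which is the point about low-definition voxel data carrying usable information.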
Sure, we won't be reading individual thoughts yet, and certainly not porting minds to silicon for as long as it takes to scale up another few orders of magnitude, but I see no intrinsic problem in doing so once the technology has the bandwidth to map an entire brain at the cellular level in realtime (however many fps that needs to be to achieve the necessary definition.) But that is a whole other rung of the ladder. And not necessarily the next one after UH™.

Do you really think it is intrinsically impossible, even given that the human race (and its exponential technological growth) may continue to evolve for another 100 million years? Or more? If not, then we're only arguing about timing. In the last 100 million years, we've progressed from arboreal rodents in a world dominated by reptiles to putting a man on the moon. And 99.999% of that technological advance has happened in the last few hundred years - it's massively exponential.

But if you think it is intrinsically impossible... well, I'd like to know why, because it seems unnecessarily pessimistic to me. Our technology has reached the point on an exponential curve where it's about to go asymptotic into some sort of Singularity, unless we blow ourselves up, or someone switches the universe off by mistake. And I think there's at least as good a chance that that won't happen as that it will. At least not before we figure out how to perfect a low-tech, reliable lie detector. It's a pretty modest goal (compared to some of the weird and wonderful stuff that gets suggested by people like me.)

But it can at least image the whole brain at once in realtime.

That it can. Just as an x-ray can show you whether or not you have a broken bone

Except an fMRI does it in video - not just a snapshot. That's the point of the "functional". And that's a lot more relational data than a still picture can ever paint.

I'll respond to your comments on the 2 articles (sorry they're pdf - I know it's a PITA) when I've resynched my body clock.

18 years ago #4185
“Lying” is an evolutionary advantage

So were tails. And before that fins.

It doesn't mean we still need it - especially if it gives an excuse for cynical politicians to start wars in a world already over-proliferated with WMDs.


