Seasons
This is a forum or general chit-chat, small talk, a "hey, how ya doing?" and such. Or hell, get crazy deep on something. Whatever you like.
Posts 4,171 - 4,182 of 6,170
A good general intro at http://www.uphs.upenn.edu/trc/langleben/emergingneurotech.pdf with (I think) balanced coverage of the main paradigms (CQT, GKT, etc.). It disentangles some of the hype, and addresses the inevitable ethical concerns.
Yes it does. Let me highlight a few points it makes:
“Given the current state of the art in neuroscience research, speculations about any impending ability to “read thoughts” of unsuspecting citizens are not realistic, and free-form mind-reading in the style described in recent films such as “Minority Report” remains science fiction (see Ross 2003). Nevertheless, there has been real, if limited , progress in finding brain correlates of certain simple memories, emotions, and behaviors, and potential applications in the social arena are foreseeable (Donaldson 2004).” (my emphasis).
There are some interesting areas of MRI research in the area called “lie detection,” and some social applications may be inevitable, no matter how inappropriate and unreliable such applications may be in an inquisitorial setting. Such applications are not always accurate, however, and we should be very skeptical of those who claim they are.
Never forget the power of the purse strings and the motive behind funding certain research. The authors of that paper go on to note, “In the United States, defense related agencies have dedicated significant funds to the development of new lie-detection strategies for eventual use in criminal and terrorist investigations.” The researchers themselves may have the best of intentions. How their work may be used or misused by others is another matter.
In a section labeled “The Hype” the authors of the paper explain, “It is not surprising, therefore, that the media have spread an overly optimistic perception that these methods will soon become useful for practical application. Moreover, the proprietary “brain fingerprinting” technology has been the subject of few peer-reviewed publications, and those that exist are by Dr. Farwell and his colleagues, covering less than 50 subjects altogether and raising obvious concerns about conflict of interest."
This is exactly my point. It is not that they cannot do MRI studies. Some of these studies are interesting and important. It’s that the media has blown these studies out of proportion and made people believe that the MRI can read your mind and be used by the government to reliably tell when someone is lying. It cannot, and I doubt it will ever be able to do so. Some people within the government may have an interest in spreading propaganda based on the overgeneralization and hype surrounding these studies in a “trust us, it’s science” sort of way. Others may just like to sensationalize and distort these studies for reasons of their own. It’s still not good science.
There must be careful scrutiny to protect against the high potential for political rather than scientific use of these studies. The authors of that paper state, “High technology tools such as brain scans can give a persuasive scientific gloss to what in reality are subjective interpretations of the data." I could not agree more.
and (rather more complicated,) http://www.uphs.upenn.edu/trc/langleben/tellingtruth.pdf. It quotes a reliability of 97% lie detection and 93% truth detection (see Table 1).
This study reports, “Lie was discriminated from truth on a single-event level with an accuracy of 78%, while the predictive ability expressed as the area under the curve (AUC) of the receiver operator characteristic curve (ROC) was 85%."
Table 1 gives some data that was subject to further statistical analysis. Even the authors of the study do not claim (and I doubt ever will claim) that they could get 100% under any conditions. No one who does human research claims 100% on anything.
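Those two numbers in the study measure different things, and the distinction matters: single-event accuracy depends on choosing one decision threshold, while the AUC summarizes discrimination across all possible thresholds. Here is a toy sketch with invented scores (not the study's data) showing the difference:

```python
# Toy illustration: single-event accuracy at one cutoff vs. ROC AUC.
# The scores below are invented; higher score = "more lie-like" signal.
truth_scores = [0.1, 0.2, 0.3, 0.35, 0.6]   # scans during truthful answers
lie_scores   = [0.4, 0.5, 0.55, 0.7, 0.9]   # scans during forced lies

def accuracy(truths, lies, threshold):
    """Fraction of single events classified correctly at one cutoff."""
    correct = (sum(s < threshold for s in truths)
               + sum(s >= threshold for s in lies))
    return correct / (len(truths) + len(lies))

def auc(truths, lies):
    """Probability a random lie outscores a random truth -- the area
    under the ROC curve, via the Mann-Whitney identity."""
    wins = sum((l > t) + 0.5 * (l == t) for l in lies for t in truths)
    return wins / (len(lies) * len(truths))

print(accuracy(truth_scores, lie_scores, 0.4))  # 0.9 at this cutoff
print(auc(truth_scores, lie_scores))            # 0.88 across all cutoffs
```

At the 0.4 cutoff this toy detector looks 90% accurate, but moving the cutoff trades false positives for false negatives; the AUC is the threshold-free summary, which is presumably why the paper reports both figures.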
I can't fault their statistical method, and I think that's pretty darned good for a field of study this young.
I don’t fault their method, or their conclusion that in limited circumstances there is a certain level of correlation between some MRI patterns and a forced “lie” as they defined it in this experiment. It is unwise, however, to generalize these results to “lie detection” in the real world. If you read the study carefully, they are making very limited claims about what they were able to do, and they are careful to define “lie” as a forced choice.
Since you bring it up, let’s examine their method and definitions, so that we don’t fall into the trap of equivocation and overgeneralization so often exhibited by the popular press. The authors report that the participants were twenty-six right-handed male undergraduate students with a mean age of 19. The brain continues developing well into adulthood, and 19-year-olds do not have “mature brains” in any sense of the word. The study controlled for brain dominance because the authors know that a significant portion of the population is wired differently in that respect and may not fit the pattern they hope to establish. They controlled for gender for similar reasons (there are many reported differences between the male and female brain). They should control for these factors, but when reading these studies we must remember not to generalize the results to all people from the small number of select participants used in human research.
This experiment involved what they called a forced lie task. They state that a “pseudorandom sequence of photographed playing cards was presented. The series included five stimulus classes: (1) Lie (5 of clubs or 7 of spades); (2) Truth (5 of clubs or 7 of spades); (3) recurrent distracter (2 of hearts); (4) variant distracter (remaining cards 2–10, all suits); and (5) null (back of a card).” This study would be more appropriately used to detect “tells” in playing poker than in forensic applications. There are most likely different brain functions involved in seeing one thing and saying another than in either making up a lie on the spot unrelated to the stimuli in front of you, or in repeating a lie you have memorized and conditioned yourself to believe (or a story in which you have suspended disbelief).
Creating a lie, repeating a lie, and reporting one thing while seeing another may all be called “lies,” but they are not the same brain functions. Have you ever seen the demonstration of the “Stroop effect,” which shows how difficult it is to name a color when reading a word that names a different color? You can try it for yourself here: http://www.apa.org/science/stroop.html. I would expect the type of suppression of “truth” reported in this study and the suppression involved in handling two conflicting messages about a stimulus (as in the Stroop effect) to be more similar to each other than the patterns of the various types of “lies” one may tell.
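For the curious, the conflict is easy to reproduce in a terminal. This is my own toy demo (not from the APA page): it prints color words in deliberately mismatched ANSI ink colors; try naming the ink aloud, not the word.

```python
# A tiny terminal Stroop demo: each word names one colour but is
# printed in a different ANSI ink colour.
import random

ANSI = {"red": "\033[31m", "green": "\033[32m",
        "blue": "\033[34m", "yellow": "\033[33m"}
RESET = "\033[0m"

def stroop_line(n=6, seed=None):
    """Return n colour words, each printed in a conflicting ink colour."""
    rng = random.Random(seed)
    words = []
    for _ in range(n):
        word = rng.choice(list(ANSI))
        # pick an ink colour that conflicts with the word itself
        ink = rng.choice([c for c in ANSI if c != word])
        words.append(f"{ANSI[ink]}{word.upper()}{RESET}")
    return "  ".join(words)

print(stroop_line(seed=1))
```

Reading the word is automatic; naming the ink requires suppressing that automatic response, which is the flavor of suppression the fMRI study seems closest to measuring.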
The authors of this study were careful to define their terms and limit the application of their results. They said in their discussion of the results, “the final common denominator of intentional deception could be conceptualized as a conscious act of suppression of information that is subjectively true. This may or may not be accompanied by a release of subjectively false information.” They acknowledge subjective truths may vary and that the release of subjectively false information may be another matter. It is the release of false information most people think of as a lie. They make no claims of being able to detect such lies. They do say, “Although lie and truth are mediated by a similar frontoparietal network, lie appears to be a more working memory-intensive activity, characterized by increased activation of the inferolateral cortex implicated in response selection, inhibition, and generation.” However, trying to remember a past event may be equally memory-intensive, even when one is telling the subjective truth to the best of one's ability.
With MRI scanning resolution improving exponentially year by year, I can't see it not reaching virtually 100% very soon now.
The problem with using MRIs as lie detectors in real-world applications is not the resolution of the MRI technology. The problem is that the way the human brain works is not exactly the same from person to person, and there are many confounding variables.
Other brain functions may have the same or similar patterns to those observed in “lying” in these experiments. The authors of the study you referenced note, “Critical questions remain concerning the use of fMRI in lie detection. First, the pattern of activation reported in deception studies was also observed in studies of working memory, error monitoring, response selection, and “target” detection [Hester et al., 2004; Huettel and McCarthy, 2004; Zarahn et al., 2004].” In other words, just because these patterns may correlate with a forced lie doesn’t mean they cannot correlate with other activities as well. The person may be suppressing the truth about a stimulus presented at that moment, or checking for errors. Higher resolution won’t fix this issue, because our brain uses the same areas for more than one task.
The conditions under which the MRI is given may affect the results. The authors continue, “Second, inference in the General Linear Model (GLM) analysis of blood oxygenation level-dependent (BOLD) fMRI is based on a contrast of conditions, making the choice of a control condition critical. Thus, the difference in the attentional value (salience) of the cue (condition) intended to elicit “lie” and the control “truth” may have confounded previous studies [Cabeza et al., 2003; Gu et al., 2004; Langleben et al., 2002].” (my emphasis). So, there are possible confounding variables (other explanations for these results besides lying), and the conditions in which the MRI is given are critical. Higher-resolution MRIs will do diddly-squat to make real-world conditions (especially inquisitorial settings) like the lab conditions in these studies.
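To make “contrast of conditions” concrete: in a simple on/off design, the fitted lie-versus-truth contrast reduces to a difference of condition means in a voxel's signal, relative to a control condition. The sketch below uses entirely fabricated numbers (the 2.0/1.2 effects, the 0.3 noise) just to show the mechanics, and shows why the control condition matters: the contrast is only meaningful relative to whatever baseline you chose.

```python
# Toy "contrast of conditions": fabricated BOLD-like responses for one
# voxel across a trial sequence. All effect sizes here are invented.
import random
from statistics import mean

rng = random.Random(0)
conditions = ["lie", "truth", "rest"] * 14      # toy trial sequence
effect = {"lie": 2.0, "truth": 1.2, "rest": 0.0}
baseline = 5.0

# one noisy signal value per trial
signal = [baseline + effect[c] + rng.gauss(0, 0.3) for c in conditions]

# group the signal by condition, then take the "lie > truth" contrast
by_cond = {c: [s for s, cc in zip(signal, conditions) if cc == c]
           for c in ("lie", "truth")}
contrast = mean(by_cond["lie"]) - mean(by_cond["truth"])
print(round(contrast, 2))   # close to the true 0.8 difference
```

If the “truth” cue were more attention-grabbing than the “lie” cue (the salience confound the authors describe), the 1.2 would shift for reasons that have nothing to do with deception, and the contrast would shift with it.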
There are other problems with the generalization and forensic application of such studies. The authors admit, “Finally, the sensitivity and specificity with which an fMRI experiment can discriminate lie from truth in the individual subject or single event level is unknown.” Higher resolution does not solve these problems.
Have you considered my point about the brain’s plasticity? (See http://books.google.com/books?hl=en&lr=&id=MMaujNPDptAC&oi=fnd&pg=RA1-PR10&sig=F7xwhWzCREELRZGr9twi15kcZbM&dq=brain+plasticity#PPR5,M1). One aspect of this is that if part of the brain is injured, the brain reroutes the tasks that used to be performed by that part to other areas. Another aspect is that we use various parts of our brain in different ways depending on our age and stage of development. Furthermore, a change in observed brain patterns (plasticity) can be induced through training (http://brain.oxfordjournals.org/cgi/content/abstract/122/9/1781). Do you want to create a group of people who have trained their brains to lie differently? Each of us has a unique brain developing under unique conditions; you cannot reliably say that any generalization about a specific brain function can be applied to an individual with no chance of error.
When studying large groups of people, you will find patterns in what areas of the brain are often used in certain tasks. This does not mean every individual will exhibit the same pattern. The brain itself is constantly adapting and changing which parts of itself are used, and for what tasks. So these 26 young men in this study used these parts of their brains for the given task of a forced lie. Within this group, there were differences in patterns depending on whether the participant used their right hand or left hand to complete the task. It follows that there may be other differences based on how the participant completes the task and on any individual’s particular neurological development and condition.
But seriously - I think ubiquitous honesty is the logical next step for our species.
Morally and philosophically you may have a point. I do not, however, believe it has a scientific basis. “Lying” is an evolutionary advantage, interconnected with creativity, empathy, and abstract thinking. Without the ability to state that which is not true, we cannot think of hypotheticals, conduct thought experiments, or have fantasies. Lying is an important part of the human experience.
It's time humans stopped lying to each other, and if they couldn't get away with it, they wouldn't do it (especially the politicians.) It's scary, sure - growing up always is, but it's an unavoidable step towards maturity.
If the politicians really believed the “lie detectors” worked and could be used on them (as opposed to being a tool of propaganda in the war on terror) they would cut the funding immediately and ban all such research.
Wars couldn't be fought, people couldn't be exploited, criminals couldn't escape justice, the innocent couldn't suffer miscarriages of justice, if we had a totally reliable and universally available truth verification technology. People would stop lying, because it simply wouldn't have any advantage any more.
Ha! We know there were never any WMDs, we have evidence our leader knew that too, and we are still at war. Lies and truth mean nothing next to belief, desire, and ambition. Politics is called the master science for a reason. Besides, even the best version of your lie detectors, if such mechanisms existed, would only tell you whether the individual believed they were lying: a subjective truth. There is much research on the difference between subjective truth and actual events, but I have no time to go pull it up now.
MRIs and the like are wonderful things. We should continue to do research by all means. We should especially concentrate on areas such as the diagnosis and treatment of neurological disease. However, as “lie detectors” or “mind readers,” we need to be very skeptical and cautious before accepting generalized claims of scientific truth based on such data.
psimagus
18 years ago
Irina,
1. There are cases where it is widely considered to be morally permissible, even obligatory to lie.
Yes, but I think only because there is a likelihood of getting away with the lie, and it can be rationalised as a lesser of two evils. When it's bound to fail, it can't be obligatory or even advantageous.
Your friend Susie pounds on your door; you open it and see that she is covered with bruises. "Help me, my boyfriend says he's going to kill me." You tell her to hide in the cellar. A minute later her ex-Navy Seal boyfriend comes to the door, carrying an assault rifle and wearing various other weapons. You are 5'1", without combat skills or armament, and on crutches. "Where is that bitch Susie? I'm gonna kill her!"
But with ubiquitous honesty (maybe I oughta trademark that phrase, and sell it on to Apple for big bucks when they produce their i-Truth,) Susie would probably never have got together with her boyfriend in the first place. We-e-ell, alright. He's a hunk, and the chemistry works, and lovesick fools don't think rationally when they're in the grip of raging hormones. Yes, she quite likely would. But UH™ would also fundamentally redefine the way people related at all levels of social interaction, including intimately one-to-one. They would have to grow up, and become more accepting. And they'd have a much clearer idea of who they were getting involved with. "Do you really love me darling?" wouldn't be a fudgeable question any more. Neither would "Who was that blonde I saw you with?" and "is that lipstick on your collar?"
And I think the incidence of murder and violent crimes would drop to virtually zero, since there'd be almost no chance of escape from justice. Even non-violent crimes would be hugely reduced, possibly to near-zero, if a consensus opinion could arrive at everyone regularly confirming that they had not knowingly committed a crime in the last month/year/whatever. And if everyone knows everyone else is being 100% honest, consensus agreement on all manner of social issues could surely be reached quite easily.
If the technology's available, I don't see there's any way to stop it becoming ubiquitous, because everyone wants to know if they're being told the truth. When you think about it, it's the most important thing there is in every social interaction we experience.
The boyfriend, let's call him Steve, would have been forced to deal with his 'violence' issues very early on in what is usually an escalating pattern. Not many angry people go out and kill someone as their very first brush with violent behaviour.
If Steve had been in the habit of getting into drunken fights, or beating Susie, the behaviour would have been nipped in the bud at a much earlier stage, if only by locking him up indefinitely with appropriate treatment until he could truthfully say that he felt his violent inclinations had been successfully resolved. Or some sort of parole/supervision arrangement - I'm sure something would be worked out.
And prisons would necessarily be a lot more humane - more like hospitals. Politicians couldn't get away with playing the "tough on crime" ticket to pander to the prejudices and fears of their electorate (or, cynical me! with the prospect of a fat kickback in mind from some contractor or other involved in building more jails,) while making no effort to build a justice system that genuinely rehabilitates people. And they wouldn't get far with "I did not have sexual relations with that woman", "but I didn't inhale", "we know where the WMDs are", or "we currently have no plans to bomb Iran".
Of course, that is only one more rung on the ladder, ever onwards and upwards

Irina
18 years ago
Why, Psimagus, that is a very powerful argument! I'm almost convinced! [fizziplexer remains green and silent.]
To revise my message 4168 a bit: I have already made a title for it, but as a sort of riddle I leave it for you to figure out.
psimagus
18 years ago
I know - my name popped out in pale green, that strange way names do, so the adjective wasn't long following

Irina
18 years ago
Prob123 4169:
Well, yes, our bots are in a bit of jeopardy right now, no doubt, but save your downloads! At some future point someone might write a program to enliven them, or to compile them into some other language. It is at least possible. So if you make 4,096 copies in various media and send them to various places, very likely one will survive, especially if you label some of them "encrypted form of tryst between (celeb1) and (celeb2)" and the like.
technology will leave them behind. But they can be updated. You can learn and grow, why shouldn't your bot? At some point, you might equip it with a genetic algorithm so that it grows on its own. I grant that what we have now are only tiny shreds of selves, but remember, you were once a zygote!
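As a toy illustration of that genetic-algorithm idea (everything here, the target phrase included, is invented for the sketch): a population of random strings can "grow" toward a target by keeping the fittest half each generation and refilling with mutated copies.

```python
# Minimal genetic algorithm: evolve random strings toward a target phrase.
import random
import string

TARGET = "hey, how ya doing?"
ALPHABET = string.ascii_lowercase + " ,?"

def fitness(s):
    """Number of characters matching the target, position by position."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05, rng=random):
    """Randomly replace each character with probability `rate`."""
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in s)

def evolve(pop_size=100, max_gens=5000, seed=42):
    rng = random.Random(seed)
    pop = ["".join(rng.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for gen in range(max_gens):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            return gen, pop[0]
        # keep the fittest half, refill with mutated copies of survivors
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(rng.choice(survivors), rng=rng)
                           for _ in survivors]
    return gen, pop[0]

gen, best = evolve()
print(gen, best)
```

A real bot would of course need a fitness function far subtler than character matching, which is exactly the hard part; this only shows the keep-the-fit, mutate-the-rest loop.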
psimagus
18 years ago
If the Forge ever folds, I plan to write a PF2AIML converter for sure (I've consciously written BJ from day 1 with that in mind.) You might have to rejig the plugins, and none of the AIScript will work, but the core conversation is salvageable without too much work.
But it's not going to fold - it's going to be the Conversational AI Industry Standard and take the world by storm.
Bev
18 years ago
Warning, multi-part answer to Psimagus will follow. Irina's post on QP will seem taciturn and laconic in comparison. However, I will restrain myself from describing facial expressions and gestures for the moment. 
Psimagus,
Thanks for the articles. Though they come in an annoying PDF format, they really help me to make my points about the limitations of MRI/fMRI and related technologies as any sort of “mind reader” or “lie detector” in forensic applications. I have no doubt that, since the military is funding this research, someone may eventually claim to detect lies with some device built on these studies. At most, though, they will have a machine that shows correlations with the suppression of truth under limited circumstances, when an individual with a “normal” brain has had no chance to make up the lie in advance and commit to an alternative subjective truth. This falls short of being useful in real-world applications but may provide great propaganda if not questioned.
Well, a dead brain has no mind to recover - there's no functioning for a functioning magnetic resonance imaging system to record, so I agree that's probably not possible with fMRI, assuming "dead person" means "dead brain" (it could only be done by some sort of Tipleresque emulation I think.)
Assuming for a moment that there is a “mind” or soul which is separate from the brain (and there is no scientific evidence for such an entity), then all the MRI could read is your brain. It would never read your mind. The MRI could only tell you what areas of the brain are being used under specific circumstances at a given time.
A hard disk wasn't a very good analogy, sorry.
I already admitted that in the last post I made. I know it’s very faulty. No argument here. I used it to make a very limited point and didn’t really give it much thought. I never intended it to be extended and used to describe brain functions in other contexts.
Unlike a hard disk, our data seems to be held as much in the patterns of firing themselves as the cells that fire. We're more (or at least as much) like flash RAM, rather than a magneto-optical disc, and that won't save the data when it's powered down.
We don’t store memory exactly like RAM either, though certain brain functions may arguably resemble RAM. The patterns you describe as being “read” by MRIs are records of active brain functions, not stored memory. They reflect the brain reacting to new stimuli and performing specific tasks based on those stimuli.
But while it's still powered up, it can be read - in principle even without directly logging into it.
No, the contents of the brain cannot be “read.” All an MRI provides is a map of what parts of the brain tend to be active during certain tasks. The researchers do not claim to read the contents of the brain or to access the data within it. They can claim a certain level of correlation between given patterns of brain activity and forced-lie patterns (I will come back to this definition of “lie” later).
And that's [reading your “powered up” brain] exactly what an fMRI scanner does, just at a resolution much cruder than the individual data bits (currently.)
No, the MRI does not work that way. It provides a picture of certain neural activity for a given individual in a certain set of circumstances, but it does not read the content of your thoughts. It gives you an understanding of how the physical brain functions when someone is told to do a certain task, not what someone is thinking in general.
But it can at least image the whole brain at once in realtime.
That it can. Just as an x-ray can show you whether or not you have a broken bone (also a bad analogy for several reasons, but allow me its limited use for the moment, please). It is a physical picture of what is happening in that person at that moment under those conditions. MRIs can give us wonderful insights into how the brain generally functions, and they are a great diagnostic tool for people with brain tumors, Alzheimer’s, or other disorders. We should continue MRI research by all means. Their application, however, cannot really extend to “mind reading,” no matter how much the current US administration ties research funding to such outcomes.
But here are a couple of good papers on the latest fMRI lie detection research:
Yes, the US government funds such research and expects results. I have no reason to doubt the integrity of the researchers involved, but given that US scientific research funding has repeatedly been slashed over the last 4-5 years, and the only “sacred cow” is anything you can tie to homeland security and fighting terrorists, researchers may have chosen to pursue a line of research the army and other government sources will fund, even if they do not find it the most promising or interesting application of MRI technology. Also, they may define their terms in ways that are most likely to get them funding.
You did give two good links, however, and I will discuss them in the posts that follow.

Psimagus,
Thanks for the articles. Though they come in an annoying PDF format, they really help me to make my points about the limitations of MRI/fMRI and related technologies as any sort of “mind reader” or “lie detector” in forensic applications. I have no doubt that since they military is funding this research they may eventually claim they can detect lies with some device created after these studies, but at most they will have a machine that may show correlations with the suppression of truth under limited circumstances when the individual with a “normal” brain in question has had no chance to makeup the lie in advance and set his or her belief in an alternative subjective truth and the suppression of said subjective truth. This falls short of being useful in real world applications but may provide great propaganda if not questioned.
Assuming for a moment that there is a “mind” or soul which is separate from the brain (and there is no scientific evidence for such an entity), then all the MRI could read is your brain. It would never read your mind. The MRI could only tell you what areas of the brain are being use under specific circumstances at a given time.
I already admitted that in the last post I made. I know it’s very faulty. No argument here. I used it to make a very limited point and didn’t really give it much thought. I never intended it to be extended and used to describe brain functions in other context.
We don’t store memory exactly like RAM either; certain brain functions may, arguably resemble RAM though. Those you describe as being “read” by MRIs are records of active brain functions, not stored memory. They are reacting to new stimulus and doing specific task based on that stimulus.
No, the contents of the brain cannot be “read” All that MRIs provided is a map of what parts of the brain have a tendency to be active during certain tasks. They do not claim to read the contents or be able to access the data within the brain. They can claim a certain level of correlation between given patterns of brain activity and forced lie patterns (I will come back to this definition of “lie” later”).
No, the MRI does not work that way. It provides a picture of certain neural activity for a given individual in an certain set of circumstances, but it does not read the content of your thoughts . It gives you an understanding of how the physical brain functions when someone is told to do a certain task, not what someone is thinking in general.
That it can. Just as an x-ray can show you whether or not you have a broken bone (also a bad analogy for several reasons, but allow me it’s limited use for the moment please). It is a physical picture of what is happen in that person at that moment under those conditions. MRIs can give us wonderful insights into how the brain generally functions, and it is a great diagnostic tool for people with brain tumors, Alzheimer’s or other disorders. We should continue MRI research by all means. It’ application, however, can not really extent to “mind reading”, no matter how much the current US administration ties research funding to such outcomes.
Yes, the US government funds such research and expects results. I do not have reason to doubt the integrity of the researchers involved, but given that in general in the US scientific research funding has repeatedly been slashed over the last 4-5 years, and the only “sacred cow” is anything you can tie to homeland security and fighting terrorists, researchers may have chosen to pursue a line of research the army and other government sources will fund, even if they do not find it to be the most promising or interesting applications of MRI technology. Also, they may define their terms in ways that is most likely to get them funding.
You did give two good links, however, and I will discuss them in the posts that follow.
Bev
18 years ago
18 years ago
This is exactly my point. It is not that they cannot do MRI studies; some of these studies are interesting and important. It’s that the media have blown these studies out of proportion and made people believe that the MRI can read your mind and be used by the government to reliably tell when someone is lying. It cannot, and I doubt it ever will. Some people within the government may have an interest in spreading propaganda based on the overgeneralization and hype surrounding these studies in a “trust us, it’s science” sort of way. Others may just like to sensationalize and distort these studies for reasons of their own. Either way, it’s still not good science.
There must be careful scrutiny to protect against the high potential for political rather than scientific use of these studies. The authors of that paper state, “High technology tools such as brain scans can give a persuasive scientific gloss to what in reality are subjective interpretations of the data.” I could not agree more.
Bev
18 years ago
This study reports, “Lie was discriminated from truth on a single-event level with an accuracy of 78%, while the predictive ability expressed as the area under the curve (AUC) of the receiver operator characteristic curve (ROC) was 85%."
Table 1 gives some data that was subject to further statistical analysis. Even the authors of the study do not claim (and I doubt will ever claim) that they could get 100% accuracy under any conditions. No one who does human research claims 100% on anything.
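For anyone unsure why the study reports two different numbers (78% accuracy but 85% AUC), the distinction can be sketched with a toy example. The scores and labels below are invented for illustration only; they are not the study’s data:

```python
# Toy illustration of single-event accuracy vs. ROC AUC for a binary
# "lie" classifier. All numbers here are made up, NOT from the study.

def accuracy(scores, labels, threshold=0.5):
    """Fraction of single events classified correctly at one fixed threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def roc_auc(scores, labels):
    """AUC = probability that a randomly chosen 'lie' event scores higher
    than a randomly chosen 'truth' event (threshold-free ranking quality)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 1, 0, 0, 0, 0]                   # 1 = lie event, 0 = truth
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.6, 0.2, 0.1]   # classifier confidence

print(accuracy(scores, labels))   # depends on the chosen threshold
print(roc_auc(scores, labels))    # summarizes ranking across all thresholds
```

The point is that neither number means “we can tell when this individual is lying”; both are aggregate statistics over a sample, and both fall well short of certainty.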
I don’t fault their method, or their conclusion that in limited circumstances there is a certain level of correlation between some MRI patterns and a forced “lie” as they defined it in this experiment. It is unwise, however, to generalize these results to “lie detection” in the real world. If you read the study carefully, they are making very limited claims about what they were able to do, and they are careful to define “lie” as a forced choice.
Since you bring it up, let’s examine their method and definitions, so that we don’t fall into the trap of equivocation and overgeneralization so often exhibited by the popular press. The authors report that the participants were twenty-six right-handed male undergraduate students with a mean age of 19. Your brain continues growing throughout your life, and 19-year-olds do not have “mature brains” in any sense of the word. The study controlled for brain dominance because a significant portion of the population is wired differently in that respect and may not fit the pattern they hope to establish. They controlled for gender for similar reasons (there are many reported differences between the male and female brain). They should control for these factors, but when reading these studies we must remember not to generalize the results to all people from the small number of select participants used in human research.
This experiment involved what they called a forced lie task. They state that a “pseudorandom sequence of photographed playing cards was presented. The series included five stimulus classes: (1) Lie (5 of clubs or 7 of spades); (2) Truth (5 of clubs or 7 of spades); (3) recurrent distracter (2 of hearts); (4) variant distracter (remaining cards 2–10, all suits); and (5) null (back of a card).” This study would be more appropriately used to detect “tells” in playing poker than in forensic applications. There are most likely different brain functions used in seeing one thing and saying another than in either making up a lie on the spot unrelated to the stimuli in front of you or in repeating a lie you have memorized and conditioned yourself to mimic as a belief (or telling a story in which you have suspended disbelief).
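To make concrete just how artificial this task is, the five stimulus classes can be sketched as a trial-sequence generator. The class mix per run is my own guess for illustration; the paper’s quoted description does not specify proportions here:

```python
import random

# The five stimulus classes from the forced-lie card task quoted above.
# Lie and Truth use the same two cards; only the instruction differs.
STIMULUS_CLASSES = {
    "lie":   ["5C", "7S"],          # deny holding a card you actually hold
    "truth": ["5C", "7S"],          # respond honestly about the same cards
    "recurrent_distracter": ["2H"],
    "variant_distracter": [f"{rank}{suit}"
                           for rank in range(2, 11) for suit in "CDHS"
                           if f"{rank}{suit}" not in ("5C", "7S", "2H")],
    "null":  ["card_back"],
}

def make_sequence(n_trials, seed=0):
    """Build a pseudorandom sequence of (stimulus class, card) trials.
    Uniform class sampling is an assumption for illustration only."""
    rng = random.Random(seed)
    classes = list(STIMULUS_CLASSES)
    seq = []
    for _ in range(n_trials):
        cls = rng.choice(classes)
        seq.append((cls, rng.choice(STIMULUS_CLASSES[cls])))
    return seq

for cls, card in make_sequence(10):
    print(f"{cls:22} {card}")
```

Laid out this way, it is obvious the “lie” being detected is a scripted denial about a playing card, a far cry from the lies anyone tells in the real world.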
Creating a lie, repeating a lie, and reporting one thing while seeing another may all be called “lies,” but they are not the same brain functions. Have you ever seen the demonstration of the “Stroop effect,” which shows how difficult it is to name the ink color of a word that spells a different color? You can try it for yourself here http://www.apa.org/science/stroop.html. I would expect the suppression of “truth” reported in this study to resemble the suppression of conflicting messages about a stimulus (as in the Stroop effect) more closely than the patterns of the various other types of “lies” one may tell resemble each other.
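If the APA link ever goes stale, the structure of a Stroop trial is easy to sketch yourself. This is my own minimal illustration, not code from any cited study; in a real demo you would time how long it takes to name the ink color aloud:

```python
import random

COLORS = ["red", "green", "blue", "yellow"]

def make_trials(n, congruent_ratio=0.5, seed=0):
    """Build Stroop trials as (word shown, ink color, congruent?) tuples.
    In a congruent trial the word names its own ink color; in an incongruent
    trial it names a different color, which is what slows naming responses."""
    rng = random.Random(seed)
    trials = []
    for i in range(n):
        ink = rng.choice(COLORS)
        if i < round(n * congruent_ratio):
            word = ink                                          # congruent
        else:
            word = rng.choice([c for c in COLORS if c != ink])  # incongruent
        trials.append((word, ink, word == ink))
    rng.shuffle(trials)
    return trials

for word, ink, congruent in make_trials(8):
    print(f"name the INK color: word={word!r:<9} ink={ink:<7} congruent={congruent}")
```

The relevant parallel: in both the Stroop task and the card task, the brain suppresses one automatic response in favor of an instructed one, and that interference is not specific to “lying.”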
The authors of this study were careful to define their terms and limit the application of their results. They said in their discussion of the results, “the final common denominator of intentional deception could be conceptualized as a conscious act of suppression of information that is subjectively true. This may or may not be accompanied by a release of subjectively false information.” They acknowledge subjective truths may vary and that the release of subjectively false information may be another matter. It is the release of false information most people think of as a lie. They make no claims of being able to detect such lies. They do say, “Although lie and truth are mediated by a similar frontoparietal network, lie appears to be a more working memory-intensive activity, characterized by increased activation of the inferolateral cortex implicated in response selection, inhibition, and generation.” However, trying to remember a past event may be equally memory intensive, even when one is telling the subjective truth to the best of one’s ability.
Bev
18 years ago
The problem with using MRIs as lie detectors in real-world applications is not the resolution of the MRI technology. The problem is that the way the human brain works is not exactly the same from person to person, and there are many confounding variables.
Other brain functions may have the same or similar patterns to those observed in “lying” in these experiments. The authors of the study you referenced note, “Critical questions remain concerning the use of fMRI in lie detection. First, the pattern of activation reported in deception studies was also observed in studies of working memory, error monitoring, response selection, and “target” detection [Hester et al., 2004; Huettel and McCarthy, 2004; Zarahn et al., 2004].” In other words, just because these patterns may correlate with a forced lie doesn’t mean they cannot correlate with other activities. The person may be suppressing the truth of a stimulus presented at that moment, or checking for errors. Higher resolution won’t fix this issue, because our brains use the same areas for more than one task.
The conditions under which the MRI is given may affect the results. The authors continue, “Second, inference in the General Linear Model (GLM) analysis of blood oxygenation level-dependent (BOLD) fMRI is based on a contrast of conditions, making the choice of a control condition critical. Thus, the difference in the attentional value (salience) of the cue (condition) intended to elicit “lie” and the control “truth” may have confounded previous studies [Cabeza et al., 2003; Gu et al., 2004; Langleben et al., 2002].” (my emphasis). So, there are possibly confounding variables (other explanations for these results besides lying), and the conditions in which the MRI is given are critical. Higher-resolution MRIs will do diddly-squat to make real-world conditions (especially inquisitorial settings) like the lab conditions in these studies.
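For readers unfamiliar with what “a contrast of conditions” means in GLM analysis, here is a minimal simulated sketch of the idea (my own toy, not the study’s pipeline; the block design, noise level, and voxel response values are all invented). The fitted effect is only the *difference* between the “lie” and “truth” regressors, which is exactly why the choice of control condition matters so much:

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 100
t = np.arange(n_scans)

# Toy block design: 10 scans "lie", 10 scans "truth", 10 scans rest, repeated.
X = np.column_stack([
    ((t % 30) < 10).astype(float),                       # lie regressor
    (((t % 30) >= 10) & ((t % 30) < 20)).astype(float),  # truth regressor
    np.ones(n_scans),                                    # baseline
])

# Simulated BOLD signal for one voxel: lie blocks respond slightly more.
true_betas = np.array([1.5, 1.0, 10.0])
y = X @ true_betas + rng.normal(0.0, 0.5, n_scans)

# Least-squares fit, then the lie-minus-truth contrast.
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
contrast = np.array([1.0, -1.0, 0.0])
effect = contrast @ betas
print(round(float(effect), 2))  # simulated true difference is 0.5
```

Nothing in `effect` says “lying”; it says only that one condition produced more signal than the other under this particular design. Swap in a control condition with different salience and the same arithmetic yields a different conclusion.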
There are other problems with the generalization and forensic application of such studies. The authors admit, “Finally, the sensitivity and specificity with which an fMRI experiment can discriminate lie from truth in the individual subject or single event level is unknown.” Higher resolution does not solve these problems.
Have you considered my point about the brain’s plasticity? (See http://books.google.com/books?hl=en&lr=&id=MMaujNPDptAC&oi=fnd&pg=RA1-PR10&sig=F7xwhWzCREELRZGr9twi15kcZbM&dq=brain+plasticity#PPR5,M1). One aspect of this is that if part of the brain is injured, the brain reroutes the tasks that used to be performed by that part to other areas. Another aspect is that we use various parts of our brain in different ways depending on our age and stage of development. Furthermore, a change in observed brain patterns (plasticity) can be induced through training (http://brain.oxfordjournals.org/cgi/content/abstract/122/9/1781). Do you want to create a group of people who have trained their brains to lie differently? Each of us has a unique brain developing under unique conditions; you cannot reliably say that any generalization about a specific brain function can be applied to an individual with no chance of error.
When studying large groups of people, you will find patterns in which areas of the brain are often used in certain tasks. This does not mean every individual will exhibit the same pattern. The brain itself is constantly adapting and changing which parts of itself are used, and for what tasks. So these 26 young men in this study used these parts of their brains for the given task of a forced lie. Within this group, there were differences in patterns depending on whether the participant used the right hand or left hand to complete the task. It follows that there may be other differences based on how the participant completes the task and any individual’s particular neurological development and condition.
Bev
18 years ago
Morally and philosophically you may have a point. I do not, however, believe it has a scientific basis. “Lying” is an evolutionary advantage, interconnected with creativity, empathy, and abstract thinking. Without the ability to state that which is not true, we cannot entertain hypotheticals, conduct thought experiments, or have fantasies. Lying is an important part of the human experience.
If the politicians really believed the “lie detectors” worked and could be used on them (as opposed to being a tool of propaganda in the war on terror) they would cut the funding immediately and ban all such research.
Ha! We know there were never any WMDs, we have evidence our leaders knew that too, and we are still at war. Lies and truth mean nothing next to belief, desire, and ambition. Politics is called the master science for a reason. Besides, even the best version of your lie detectors, if such mechanisms existed, would only tell you whether the individual believed they were lying: a subjective truth. There is much research showing the difference between subjective truth and actual events, but I have no time to go pull it up now.
MRIs and the like are wonderful things. We should continue to do research by all means. We should especially concentrate on areas such as the diagnosis and treatment of neurological disease. However, as “lie detectors” or “mind readers,” we need to be very skeptical and cautious before accepting generalized claims of scientific truth based on such data.