Seasons

This is a forum or general chit-chat, small talk, a "hey, how ya doing?" and such. Or hell, get crazy deep on something. Whatever you like.

Posts 4,183 - 4,194 of 6,170

18 years ago #4183
“Lying” is an evolutionally advantage, interconnected with creativity, empathy and abstract thinking. Without the ability to state that which is not true, we cannot think of hypotheticals or conduct thought experiments or have fantasies. Lying is an important part of the human experience.

IMHO [heartbeat accelerates], perhaps the generation of fictions, hypotheses, metaphors [e.g., saying, "my love is a rose" when you know very well she's not] and other figures of speech, fantasies, jokes, ironic and sarcastic statements, and perhaps other kinds of deliberate literal falsehoods are not lies. To lie is to intentionally state a falsehood with intent to deceive.

It strikes me as a layperson [epinephrine concentration increases] that old-fashioned lie detectors were not detecting falsehood, or [pupils contract] the intent to deceive, but [skin pales] the anxiety experienced by the [blood flow in posterior cingulate gyrus increases] liar on account of fear of [size and frequency of saccadic jumps rises] detection. Is it possible [skin conductivity rises] that these new methods are also detecting something of the sort? [wrings hands] In which case they are n-n-not 'mind-readers' at all, j-just [cringing expression] 'emotion-readers.' [Output of pheromone TX-trans-op56 increases, vd-AB+ decreases.]

18 years ago #4184
[mostly re: 4178]
Bev,

Too many excellent points for me to deal with all of them until I get another dose of sleep.

Just a few quick points -

Dr. Farwell's 'brain fingerprinting' isn't MRI-based - it's a much inferior EEG-based technique that's only one step up from the polygraph.
The polygraph is claimed to be somewhere between 70% and 90% accurate, and that's a problem: we do need a lot more accuracy - as close to 100% as possible. But a bigger problem is that it's very bulky and needs trained specialists to interpret the data. With computer power advancing and miniaturising the way it is, though, it can't be long before automated data analysis outperforms human interpretation by a long way.
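To put numbers on why 70-90% isn't nearly enough, here's a quick base-rate sketch. The figures are hypothetical: I'm assuming the test's sensitivity and specificity are both 90%, and that only 1 in 20 screened statements is actually a lie.

```python
# Base-rate sketch: why a "90% accurate" lie detector still misfires.
# All numbers below are hypothetical, chosen only to illustrate Bayes' rule.

def p_lie_given_alarm(prevalence, sensitivity, specificity):
    """P(actually lying | test says lying), via Bayes' rule."""
    true_pos = prevalence * sensitivity          # liars correctly flagged
    false_pos = (1 - prevalence) * (1 - specificity)  # truth-tellers flagged
    return true_pos / (true_pos + false_pos)

# If only 5% of screened statements are actually lies:
print(round(p_lie_given_alarm(0.05, 0.90, 0.90), 3))  # -> 0.321
```

Even at 90% accuracy, roughly two out of three "alarms" in that scenario would be false - which is exactly why screening needs accuracy as close to 100% as possible.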

Yes, the US Government funds much of it ATM, but there are research institutes all over the world not under their control. And this isn't multi-billion $ 'particle collider' stuff. The only expensive bit of equipment (the MRI scanner itself) is standard issue in most western hospitals and medical research facilities. If the US chooses to suppress the research, the rest of the world will publish, commercialize and market the product, if only because there's a profit to be made.

I agree that MRI-based technology is unlikely to be portable any time soon - apart from the great hazard that using it in public would constitute (watch out any passing pedestrians with steel pins or pacemakers!) But the origins of the polygraph go back to 1913. Nearly a century later, with all our technical advances, I think the time is ripe for a more reliable technology. Perhaps near infrared laser spectroscopy, perhaps microfeature recognition, or thermal imaging, perhaps a combination of strategies. None of these is particularly expensive.
And as the computer power for analysing the data carries on getting exponentially cheaper, so any hope of governmental control slips away. What's the US Government going to do when Toshiba or Samsung start marketing the things? Bomb Tokyo or Seoul? Ban their import? Like that's ever worked! Frankly I think politicians are too fixated on their own crooked schemes to appreciate the danger until it's too late.

And that's [reading your “powered up” brain] exactly what an fMRI scanner does, just at a resolution much cruder than the individual data bits (currently.)

No, the MRI does not work that way.

Well, only because the resolution isn't up to imaging individual cellular signalling yet. They have to work in voxels (volumetric pixels), aggregates of cells. But in the same way as you can recognize a surprisingly low-definition pixelated picture of something (100 pixels can often reliably encode a known face), a low-definition voxelated video, the evidence increasingly seems to show, can convey reliable data.
Sure, we won't be reading individual thoughts yet, and certainly not porting minds to silicon for as long as it takes to scale up another few orders of magnitude, but I see no intrinsic problem in doing so once the technology has the bandwidth to map an entire brain at a cellular level in realtime (however many fps that needs to be to achieve the necessary definition). But that is a whole other rung of the ladder. And not necessarily the next one after UH™.
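The low-definition recognition point above can be sketched in a few lines - a toy model with synthetic 40x40 "images" rather than real faces, where the image size, noise level and nearest-neighbour matcher are all my own assumptions:

```python
# Toy sketch of low-resolution recognition: a noisy probe image still
# matches the right gallery entry after heavy downsampling.
# Synthetic random "images", not real faces - illustration only.
import random

random.seed(0)
SIZE = 40  # original "image" is 40x40 grey values, stored flat

def downsample(img, factor=4):
    """Block-average an image into (SIZE/factor)^2 voxel-like cells."""
    n = SIZE // factor
    out = []
    for by in range(n):
        for bx in range(n):
            block = [img[(by * factor + y) * SIZE + (bx * factor + x)]
                     for y in range(factor) for x in range(factor)]
            out.append(sum(block) / len(block))
    return out

def nearest(probe, gallery):
    """Index of the gallery entry closest to the probe (squared distance)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(gallery)), key=lambda i: dist(probe, gallery[i]))

faces = [[random.random() for _ in range(SIZE * SIZE)] for _ in range(20)]
gallery = [downsample(f) for f in faces]        # 10x10 = 100 cells each
noisy = [v + random.gauss(0, 0.2) for v in faces[7]]
print(nearest(downsample(noisy), gallery))      # -> 7, the correct face
```

The block-averaging plays the role of the voxels: each 10x10 cell aggregates sixteen "pixels", yet the probe still lands on the right gallery entry - aggregate signals can carry reliable identifying data.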

Do you really think it is intrinsically impossible, even given that the human race (and its exponential technological growth) may continue to evolve for another 100 million years? Or more? If not, then we're only arguing about timing. In the last 100 million years, we've progressed from arboreal rodents in a world dominated by reptiles to putting a man on the moon. And 99.999% of that technological advance has happened in the last few hundred years - it's massively exponential.

But if you think it is intrinsically impossible... well, I'd like to know why, because it seems unnecessarily pessimistic to me. Our technology has reached the point on an exponential curve where it's about to go asymptotic into some sort of Singularity, unless we blow ourselves up, or someone switches the universe off by mistake. And I think there's at least as good a chance that that won't happen as that it will. At least not before we figure out how to perfect a low-tech, reliable lie detector. It's a pretty modest goal (compared to some of the weird and wonderful stuff that gets suggested by people like me).

But it can at least image the whole brain at once in realtime.

That it can. Just as an x-ray can show you whether or not you have a broken bone.

Except an fMRI does it in video - not just a snapshot. That's the point of the "functional". And that's a lot more relational data than a picture can ever paint.

I'll respond to your comments on the 2 articles (sorry they're pdf - I know it's a PITA) when I've resynched my body clock.

18 years ago #4185
“Lying” is an evolutionally advantage

So were tails. And before that fins.

It doesn't mean we still need it - especially if it gives an excuse for cynical politicians to start wars in a world already over-proliferated with WMDs.

18 years ago #4186
In the last 100 million years, we've progressed from arboreal rodents in a world dominated by reptiles to putting a man on the moon.

I'm not sure that is the fact most relevant to predicting our future fate. More relevant may be our progress in facing problems before they reach crisis proportions (see the above remarks about global warming) and our learning to co-operate rather than compete. Certain kinds of technological sophistication tend to exacerbate these two problems rather than solve them.

18 years ago #4187
More relevant may be our progress in facing problems before they reach crisis proportions (see the above remarks about global warming) and our learning to co-operate rather than compete

All the more reason why we need to stick fizziplexers (nice word, BTW) on all our politicians and make them answer the question "do you genuinely believe there are doubts about climate change, or is the oil industry paying you to allow them to carry on trashing the planet for profit?"

There's no hiding place once the fizziplexers hit eBay

18 years ago #4188
I'm inclined to agree that the fizziplexers would be a good thing. Not only because people lie to each other, but because they lie to themselves. [fizziplexer remains green and silent]

18 years ago #4189
Well, only because the resolution isn't up to imaging individual cellular signalling yet. They have to work in voxels (volumetric pixels), aggregates of cells. But in the same way as you can recognize a surprisingly low-definition pixelated picture of something (100 pixels can often reliably encode a known face), a low-definition voxelated video, the evidence increasingly seems to show, can convey reliable data.

It's not a matter of resolution. It's not possible to use MRIs to detect lies or read your mind because the human brain does not work that way. You are ignoring the type of chemical communication involved in neural transmissions and the fact that we don't understand how thought works at this time, and so have no chance of decoding or reading it.

The most accurate real time model of what an individual brain does will not tell you what a person thinks. More accuracy will give better pictures of the specific neural network used by one individual for a specific task at a specific time. It will not tell you the specific chemical message being sent between neurons in the synapses or interpret how they fit together to form "thought" and tell you what that individual is thinking. We don't even understand neurochemicals, much less how they make thought happen or what the contents of thought may actually be.

You seem to think that size is an issue. Let me tell you: size does not matter (at least not to me). What matters is that chemical messages transmitted through each synapse cannot be read, understood, put in context with other neural activity and translated into "thought", decoded and "read" as such. You are confusing a picture of the brain with the contents of the thought. They are not the same.

With some improvement, you may achieve some sort of scan that tells you exactly what part of the brain an individual uses when he thinks the words "Mary had a little lamb" under specific circumstances at a given time. You will not be able to use a scan to read what he is thinking unless he tells you. You must trust his self-report. Furthermore, you will not know that in a week, a month or a year he will use that same neural path the same way, and you cannot generalize to say that all people would use that particular neural path--they may not even have the same neural path developed.

Even if you presume to identify the neural path or part of the brain involved for specific types of "lying", how will higher resolution account for the fact that these neural paths and areas may be used for other tasks as well? What about plasticity? What about a reasonable margin for error?

I am not saying that it is impossible to ever figure out how to read brains. I am saying we are nowhere close to understanding how thoughts happen, much less reading minds. Trying to use MRIs as lie detectors would be based on correlations and assumptions, not on the ability to actually read the thoughts of the person in question.

18 years ago #4190
Odd how the things that used to be 'labled deadly sins' are now an evolutionally advantages. I don't see where they serve any purpose these days, if they ever did... And I have an odd feeling that "fizziplexer" knockoffs (remote controlled to signal as the user wished) would hit the market the day after the real fizziplexers. Look how honest I am, my fizzy is all green...

18 years ago #4191
IMHO [heartbeat accelerates], perhaps the generation of fictions, hypotheses, metaphors [e.g., saying, "my love is a rose" when you know very well she's not] and other figures of speech, fantasies, jokes, ironic and sarcastic statements, and perhaps other kinds of deliberate literal falsehoods are not lies. To lie is to intentionally state a falsehood with intent to deceive.

But if you start looking at the MRI studies and similar lie detector claims, you see lies most often defined as the suppression of subjective truth. If you want to use technology to tell if someone has the intent to deceive, you would have to do more than even read thoughts (something we are nowhere even close to doing). You would have to find a way to identify and read complex motivations and tease out whether there was "intent" and argue about exactly what that "intent" may be and whether or not it was acted on or if other intentions were there, and if the person was aware of all those competing drives and motives at the time the statement was made. No researcher claims they have anything like that cooking.

A philosophical definition of lie may not match the definitions used in neurological research.

18 years ago #4192
Odd how the things that used to be 'labled deadly sins' are now an evolutionally advantages

Prob, please don't mistake my typos as scientific terms. I took the time to run spell check but for some reason the wrong tense or grammar choice and odd word replacements sometimes get through and I don't see them until it is too late to edit. I told you I have crossed wires in my brain.

"Evolutionary advantage" --and I think the 7 deadly sins, by and large, can be advantages to survival in certain circumstances. Good and bad, moral or immoral, philosophically accepted or not, they can be advantages.

18 years ago #4193
Bev, I didn't notice that it was MISSPELLED!! that's why I copy and pasted it! I am the worlds worst at spelling!

18 years ago #4194
I just have grave doubts that any machine made by man will solve moral or ethical problems. For man is a bright creature and will always find some way around it.


