Seasons

This is a forum or general chit-chat, small talk, a "hey, how ya doing?" and such. Or hell, get crazy deep on something. Whatever you like.

Posts 4,189 - 4,200 of 6,170

19 years ago #4189
Well, only because the resolution isn't up to imaging individual cellular signalling yet. They have to work in voxels (volumetric pixels), aggregates of cells. But in the same way that you can recognize a surprisingly low-definition pixelated picture of something (100 pixels can often reliably encode a known face), a low-definition voxelated video, the evidence increasingly seems to show, can convey reliable data.

It's not a matter of resolution. It's not possible to use MRIs to detect lies or read your mind because the human brain does not work that way. You are ignoring the type of chemical communication involved in neural transmission and the fact that we don't understand how thought works at this time, and so have no chance of decoding or reading it.

The most accurate real time model of what an individual brain does will not tell you what a person thinks. More accuracy will give better pictures of the specific neural network used by one individual for a specific task at a specific time. It will not tell you the specific chemical message being sent between neurons in the synapses or interpret how they fit together to form "thought" and tell you what that individual is thinking. We don't even understand neurochemicals, much less how they make thought happen or what the contents of thought may actually be.

You seem to think that size is an issue. Let me tell you: size does not matter (at least not to me). What matters is that chemical messages transmitted through each synapse cannot be read, understood, put in context with other neural activity and translated into "thought", decoded and "read" as such. You are confusing a picture of the brain with the contents of the thought. They are not the same.

With some improvement, you may achieve some sort of scan that tells you exactly what part of the brain an individual uses when he thinks the words "Mary had a little lamb" under specific circumstances at a given time. You will not be able to use a scan to read what he is thinking unless he tells you. You must trust his self-report. Furthermore, you will not know whether in a week, a month or a year he will still use that same neural path the same way, and you cannot generalize to say that all people would use that particular neural path--they may not even have the same neural path developed.

Even if you presume to identify the neural path or part of the brain involved for specific types of "lying", how will higher resolution account for the fact that these neural paths and areas may be used for other tasks as well? What about plasticity? What about a reasonable margin for error?

I am not saying that it is impossible to ever figure out how to read brains. I am saying we are nowhere close to understanding how thoughts happen, much less to reading minds. Trying to use MRIs as lie detectors would be based on correlations and assumptions, not on the ability to actually read the thoughts of the person in question.

19 years ago #4190
Odd how the things that used to be 'labled deadly sins' are now an evolutionally advantages. I don't see where they serve any purpose these days, if they ever did..And I have an odd feeling that "fizziplexer" knockoffs.. (remote controlled to signal as the user wished) would hit the market the day after the real fizziplexers. Look how honest I am my fizzy is all green..

19 years ago #4191
IMHO [heartbeat accelerates], perhaps the generation of fictions, hypotheses, metaphors [e.g., saying, "my love is a rose" when you know very well she's not] and other figures of speech, fantasies, jokes, ironic and sarcastic statements, and perhaps other kinds of deliberate literal falsehoods are not lies. To lie is to intentionally state a falsehood with intent to deceive.

But if you start looking at the MRI studies and similar lie detector claims, you see lies most often defined as the suppression of subjective truth. If you want to use technology to tell if someone has the intent to deceive, you would have to do more than even read thoughts (something we are nowhere even close to doing). You would have to find a way to identify and read complex motivations and tease out whether there was "intent" and argue about exactly what that "intent" may be and whether or not it was acted on or if other intentions were there, and if the person was aware of all those competing drives and motives at the time the statement was made. No researcher claims they have anything like that cooking.

A philosophical definition of lie may not match the definitions used in neurological research.

19 years ago #4192
Odd how the things that used to be 'labled deadly sins' are now an evolutionally advantages

Prob, please don't mistake my typos as scientific terms. I took the time to run spell check but for some reason the wrong tense or grammar choice and odd word replacements sometimes get through and I don't see them until it is too late to edit. I told you I have crossed wires in my brain.

"Evolutionary advantage" --and I think the 7 deadly sins, by and large, can be advantages to survival in certain circumstances. Good and bad, moral or immoral, philosophically accepted or not, they can be advantages.

19 years ago #4193
Bev, I didn't notice that it was MISSPELLED!! that's why I copy and pasted it! I am the worlds worst at spelling!

19 years ago #4194
I just have grave doubts that any machine made by man will solve moral or ethical problems. For man is a bright creature and will always find some way around it.

19 years ago #4195
Dr. Farwell's 'brain fingerprinting' isn't MRI-based - it's a much inferior EEG-based technique that's only one step up from the polygraph.

OK, but that doesn't detract from my main point: that the media hypes such research and individuals manipulate such hype for their own purposes.

19 years ago #4196
I just have grave doubts that any machine made by man will solve moral or ethical problems. For man is a bright creature and will always find some way around it.

Just so. If they can't make DRM that can't be hacked, how will they make the Truth Suppression Fizziplexer (TM) tamper-proof? How can they make the brain itself unchangeable? If they can make it, someone else can hack it.

The TSF removal device will hit ebay the day after the fizziplexer.

19 years ago #4197
The polygraph is a good example; some people, companies and cops swear by it. (I find the idea scary, and I really am not guilty of anything much.) I am sure you can order all sorts of info on how to fool a polygraph on the net.
I can think of little hackers having fun setting off people's 'fizzyplexers' if they ever came into being.
For every update to a virus scanner, there is someone there to accept the challenge and make a better virus. Ancient Rome made locks; man rose to the challenge with lock picks. Man will always find a loophole. The day he doesn't have the freedom to do so, he will no longer be man.

19 years ago #4198
It occurs to me that if we want a technology to enforce truth telling, we could simply implant a recorder when a child is born (along with a tracking chip) that will make a digital recording of everything that person sees and hears. It would be similar to a black box on an airplane, and we could probably use nanotechnology we already have to develop it. That way, we don't have to mess with subjective truths or intent or reading thoughts as such. Once a year, everyone would line up at the Ministry of Truth for their audit, and be assessed their fines or jail time as deemed appropriate by the truth technician reading their record and transcribing it into the public record. Sure, it could still be tampered with, but every black box for everyone involved in an action would have to be tampered with, and we could make that difficult. Only the rich could bribe the official tech, so most folks would become honest.

19 years ago #4199

A philosophical definition of lie may not match the definitions used in neurological research.

Very true, but what is the moral of that story? That the research is not doing what it says it's doing. One always has to watch out for this: someone redefines the words without telling you, and then announces something that sounds extraordinary but turns out not to be once you discover the words have been changed.

I agree that the similarity of lies to metaphors, irony, fictions, and the like will make it harder to construct a true lie detector. And that a detailed knowledge of what is going on in the brain, physically, is not sufficient to tell us what a person is thinking.

19 years ago #4200
It is true that many people will tamper with their fizziplexers. I'm not sure that would work in the long run - we could still have courts of law, we could require random fizziplexer checks, someone testifying in court would have to use a court-supplied fizziplexer, and so on. But even if that were useless, people like me would benefit from fizziplexers as an instrument of self-discipline. And groups of people could use them for the intended purpose, and I think that would make those groups stronger. Our lack of trust in one another is very expensive. So it's plausible to me that although it wouldn't happen overnight, in the long run the availability of fizziplexers would have a salubrious effect. That's only speculation, of course.

