ISLAMABAD: A new study has found that the human brain comprehends spoken language through a mechanism strikingly similar to that of state-of-the-art artificial intelligence (AI) models.
The research, published in the journal Nature Communications, used a technique known as electrocorticography (ECoG) to record the brain activity of individuals listening to a short podcast. This allowed the scientists to identify when and where neurons responded as words were processed.
The results indicated that the brain does not decipher the meaning of each word instantly. Rather, language comprehension unfolds across several neural layers, much as large language models such as GPT-2 and LLaMA 2 process text.
The researchers explained that “the brain at first attends to single words and then little by little combines the context, the intonation, and the overall meaning.” This “processing hierarchy” closely resembles the way AI models comprehend language, even though the two systems are implemented very differently.
The study concluded that the patterns learned by AI models may capture natural attributes of human cognition, and that AI technologies might consequently be extended beyond simple text generation to other areas.
The researchers said the similarity between human neural processing and AI language models was more pronounced than anticipated, shedding light on both neuroscience and AI development.
Why is Google AI’s medical advice dangerous to health?
Earlier, a study by a British newspaper revealed that Google’s artificial intelligence-based AI Overviews, which appear at the top of search results, are providing incorrect and dangerous information on some health matters, posing a serious risk to patients.
In one example, the report said, Google’s AI advised pancreatic cancer patients to avoid a high-fat diet, advice that experts described as medically incorrect and warned could worsen a patient’s condition. Information given about liver function tests was similarly flawed, and could lead people suffering from serious illness to consider themselves healthy.
Health and welfare experts have warned that such misinformation can divert patients from timely treatment and, in some cases, can even prove fatal. Sophie Randall, director of the Patients’ Rights Forum, said that Google’s AI Overviews appear to offer convenience, but that misinformation can put users’ health at risk. Similarly, Stephanie Parker, a representative of the Marie Curie charity, said that people rely on the internet when they are sick or worried, and that if the information is wrong, the damage is multiplied.
According to The Guardian, AI Overviews’ advice on psychology and dietary habits can also prove dangerous. Google responded that most AI Overviews are accurate and helpful and that the company is constantly working to improve their quality.