
Temporal dynamics of music and language


The temporal dynamics of music and language describe how the brain coordinates its different regions to process musical and vocal sounds. Both music and language feature rhythmic and melodic structure, and both employ a finite set of basic elements (such as tones or words) that are combined in ordered ways to create complete musical or lingual ideas.


Neuroanatomy of language and music

Key areas of the brain are used in both music processing and language processing. One such area is Broca's area, which is involved in language production and comprehension. Patients with lesions, or damage, in Broca's area often exhibit poor grammar, slow speech production and poor sentence comprehension. The inferior frontal gyrus is a gyrus of the frontal lobe that is involved in timing events and in reading comprehension, particularly the comprehension of verbs. Wernicke's area is located on the posterior section of the superior temporal gyrus and is important for understanding vocabulary and written language.

The primary auditory cortex is located on the temporal lobe of the cerebral cortex. This region is important in music processing and plays an important role in determining the pitch and volume of a sound. Brain damage to this region often results in the loss of the ability to hear any sounds at all. The frontal cortex has been found to be involved in processing the melodies and harmonies of music; for example, when a patient is asked to tap out a beat or to reproduce a tone, this region is highly active on fMRI and PET scans. The cerebellum is the "mini" brain at the rear of the skull. Similar to the frontal cortex, brain imaging studies suggest that the cerebellum is involved in processing melodies and determining tempo. The medial prefrontal cortex, along with the primary auditory cortex, has also been implicated in tonality, or determining pitch and volume.

In addition to the specific regions mentioned above, many "information switch points" are active in language and music processing. These structures, which include the thalamus and the basal ganglia, are believed to act as transmission routes that carry the neural impulses allowing the regions above to communicate and process information correctly.

Some of the above-mentioned areas have been shown to be active in both music and language processing through PET and fMRI studies. These areas include the primary motor cortex, Broca's area, the cerebellum, and the primary auditory cortices.


Imaging the brain in action

The imaging techniques best suited for studying temporal dynamics are those that provide information in real time. The methods most used in this research are functional magnetic resonance imaging, or fMRI, and positron emission tomography, or PET.

Positron emission tomography involves injecting a short-lived radioactive tracer isotope into the blood. When the radioisotope decays, it emits positrons, which are detected by the scanner's sensors. The isotope is chemically incorporated into a biologically active molecule, such as glucose, which powers metabolic activity. Whenever brain activity occurs in a given area, these molecules are recruited to that area. Once the concentration of the biologically active molecule, and of its radioactive "dye", rises high enough, the scanner can detect it. About one second elapses between the onset of brain activity and its detection by the PET device, because it takes time for the dye to reach a concentration that can be detected.
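The build-up delay can be illustrated with a minimal sketch. The uptake rate, detection threshold and isotope half-life below are assumed, illustrative values (fluorine-18 is a commonly used PET isotope); they are not taken from any particular study.

```python
import numpy as np

# Illustrative sketch (assumed values): model the delay between the onset of
# neural activity and PET detection as a first-order build-up of tracer
# concentration toward an assumed detection threshold.

HALF_LIFE_S = 109.8 * 60           # fluorine-18 half-life in seconds
DECAY_CONST = np.log(2) / HALF_LIFE_S

def tracer_concentration(t, c_max=1.0, uptake_rate=3.0):
    """Assumed first-order uptake of the tracer into an active region."""
    return c_max * (1.0 - np.exp(-uptake_rate * t))

def remaining_activity(t):
    """Fraction of the radioisotope that has not yet decayed at time t (seconds)."""
    return np.exp(-DECAY_CONST * t)

# Time (seconds) for the assumed uptake curve to cross an assumed detection
# threshold of 95% of its plateau value.
t = np.linspace(0, 5, 5001)
threshold = 0.95
t_detect = t[np.argmax(tracer_concentration(t) >= threshold)]
print(f"time to reach detection threshold: {t_detect:.2f} s")
print(f"isotope remaining after 10 minutes: {remaining_activity(600):.3f}")
```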

Functional magnetic resonance imaging, or fMRI, is a form of the traditional MRI technique that allows brain activity to be observed in near real time. An fMRI device works by detecting the changes in cerebral blood flow that are associated with brain activity. fMRI devices use a strong, static magnetic field to align the nuclei of atoms within the brain. An additional magnetic field, often called the gradient field, is then applied to raise the nuclei to a higher energy state. When the gradient field is removed, the nuclei revert to their original state and emit energy, which is detected by the fMRI machine and used to form an image. When neurons become active, blood flow to those regions increases, and this oxygen-rich blood displaces oxygen-depleted blood. Hemoglobin molecules in the oxygen-carrying red blood cells have different magnetic properties depending on whether they are oxygenated. By focusing detection on the magnetic disturbances created by hemoglobin, the activity of neurons can be mapped in near real time. Few other techniques allow researchers to study temporal dynamics in real time.
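The relationship between neural activity and the blood-flow signal that fMRI measures is commonly modeled as a convolution with a hemodynamic response function (HRF). The sketch below uses a widely assumed double-gamma HRF shape and made-up stimulus timing; it illustrates the modeling idea rather than any specific study.

```python
import numpy as np
from scipy.stats import gamma

# Illustrative sketch (assumed parameters): the BOLD signal measured by fMRI
# is often modeled as neural activity convolved with a hemodynamic response
# function (HRF). The double-gamma shape below is a common parameterization.

def double_gamma_hrf(t):
    """Canonical-style HRF: an early positive peak minus a later undershoot."""
    peak = gamma.pdf(t, a=6)           # peak roughly 5-6 s after the event
    undershoot = gamma.pdf(t, a=16)    # slower, smaller undershoot
    return peak - undershoot / 6.0

dt = 0.1                               # sampling step in seconds
t = np.arange(0, 30, dt)
hrf = double_gamma_hrf(t)

# Assumed neural activity: two brief bursts, e.g. two tones heard 10 s apart.
neural = np.zeros_like(t)
neural[(t >= 2) & (t < 3)] = 1.0
neural[(t >= 12) & (t < 13)] = 1.0

# Predicted BOLD response: neural activity filtered through the sluggish HRF,
# which is why fMRI is "near real time" rather than instantaneous.
bold = np.convolve(neural, hrf)[: len(t)] * dt
print(f"BOLD peaks at t = {t[np.argmax(bold)]:.1f} s (stimulus onset at 2 s)")
```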

Another important tool for analyzing temporal dynamics is magnetoencephalography, known as MEG. It maps brain activity by detecting and recording the magnetic fields produced by the electrical currents generated by neural activity. The device uses a large array of superconducting quantum interference devices, called SQUIDs, to detect this magnetic activity. Because the magnetic fields generated by the human brain are so small, the entire device must be placed in a specially designed room built to shield it from external magnetic fields.
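The scale of the problem can be shown with a rough order-of-magnitude estimate. The source strength and distance below are assumed, textbook-style values; the point is only that the neural signal is many orders of magnitude weaker than the ambient geomagnetic field, which is why SQUIDs and magnetic shielding are needed.

```python
import math

# Illustrative order-of-magnitude sketch (assumed values): why MEG needs
# SQUID sensors and a magnetically shielded room.

MU_0 = 4 * math.pi * 1e-7        # vacuum permeability (T*m/A)

def dipole_field(q_dipole, r):
    """Rough magnitude of the field of a current dipole q (A*m) at distance r (m)."""
    return MU_0 * q_dipole / (4 * math.pi * r**2)

brain_field = dipole_field(q_dipole=10e-9, r=0.04)   # ~10 nA*m source, 4 cm away
earth_field = 50e-6                                   # geomagnetic field, ~50 microtesla

print(f"neural field  ~ {brain_field * 1e15:.0f} fT")
print(f"Earth's field ~ {earth_field * 1e15:.0e} fT")
print(f"ratio ~ 1 : {earth_field / brain_field:.1e}")
```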




Other research methods

Another common method for studying brain activity during language and music processing is transcranial magnetic stimulation, or TMS. TMS uses electromagnetic induction to create weak electric currents within the brain by means of a rapidly changing magnetic field. These currents depolarize or hyperpolarize neurons, which can produce or inhibit activity in different regions. The effect of these disruptions on function can be used to assess brain interconnections.
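Because TMS relies on induction, the strength of the induced current depends on how quickly the coil's magnetic field changes. The pulse parameters below are assumed, typical-order figures for a single stimulator pulse, used only to show the Faraday's-law arithmetic.

```python
import math

# Illustrative sketch (assumed pulse parameters): TMS works by electromagnetic
# induction, so the induced electric field scales with how fast the coil's
# magnetic field changes.

B_PEAK = 2.0          # assumed peak magnetic field in tesla
RISE_TIME = 100e-6    # assumed time for the field to reach its peak, in seconds
LOOP_RADIUS = 0.01    # radius (m) of an assumed circular current path in cortex

dB_dt = B_PEAK / RISE_TIME                      # rate of change of the field
emf = dB_dt * math.pi * LOOP_RADIUS**2          # Faraday's law: EMF = dPhi/dt
e_field = emf / (2 * math.pi * LOOP_RADIUS)     # average induced E along the loop

print(f"dB/dt   ~ {dB_dt:.0f} T/s")
print(f"EMF     ~ {emf:.2f} V around a {LOOP_RADIUS*100:.0f} cm radius loop")
print(f"E-field ~ {e_field:.0f} V/m")
```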




Recent research

Many aspects of language and musical melodies are processed by the same brain areas. In 2006, Brown, Martinez and Parsons found that listening to a melody or a sentence resulted in activation of many of the same areas, including the primary motor cortex, the supplementary motor area, Broca's area, the anterior insula, the primary auditory cortex, the thalamus, the basal ganglia and the cerebellum.

A 2008 study by Koelsch, Sallat and Friederici found that language impairment may also affect the ability to process music. Children with specific language impairment, or SLI, were not as proficient at matching tones to one another or at keeping tempo with a simple metronome as children without language disabilities. This highlights the fact that neurological disorders that affect language may also affect musical processing ability.

Walsh, Stewart, and Frith in 2001 investigated which regions process melodies and language by asking subjects to create a melody on a simple keyboard or to write a poem. They applied TMS to the locations where musical and lingual data are processed. The research found that TMS applied to the left frontal lobe affected the ability to write or produce lingual material, while TMS applied to the auditory areas and Broca's area most inhibited the subjects' ability to play musical melodies. This suggests that some differences exist between music and language creation.




Developmental aspects

The basic elements of musical and lingual processing appear to be present at birth. For example, a 2011 French study that monitored fetal heartbeats found that, past the age of 28 weeks, fetuses respond to changes in musical pitch and tempo. Baseline heart rates were determined by two hours of monitoring before any stimulus, and descending and ascending frequencies at different tempos were then played near the womb. The study also investigated fetal responses to lingual patterns, such as a sound clip of different syllables, but found no response to the different lingual stimuli. Heart rates increased in response to high-pitched, loud sounds compared with low-pitched, soft sounds. This suggests that the basic elements of sound processing, such as discerning pitch, tempo and loudness, are already present before birth, while the processes that discern speech patterns develop after birth.

A 2010 study researched the development of lingual skills in children with speech difficulties and found that musical stimulation improved the outcome of traditional speech therapy. Children aged 3.5 to 6 years old were separated into two groups: one group heard lyric-free music at each speech therapy session, while the other received traditional speech therapy alone. The study found that both phonological capacity and the children's ability to understand speech improved faster in the group exposed to regular musical stimulation.



