Using magnetoencephalography (MEG) to detect magnetic fields in the brain, researchers at the Children's Hospital of Philadelphia have found that children with autism spectrum disorders (ASDs) process sound and language a fraction of a second slower than children without ASDs.
Their findings, detailed in an article published online January 7 in Autism Research, also show that measuring magnetic signals that mark this delay may become a standardized way to diagnose autism.
Lead author Timothy Roberts, Ph.D., vice chair of radiology research at the hospital, said more work is needed before the technique can become a standard tool, but that the pattern of delayed brain response shows promise as the first imaging biomarker for autism.
Using MEG and a specially designed helmet, the team presented a series of recorded beeps, vowels, and sentences to children. As the child's brain responded to each sound, noninvasive detectors in the MEG machine analyzed the brain's changing magnetic fields.
The researchers compared 25 children with ASDs and a mean age of 10 years to 17 age-matched typically developing children. The children with ASDs had an average delay of 11 msec in their brain responses to sounds, compared to the control children. Among the group with ASDs, the delays were similar, whether or not the children had language impairments.
The delayed response suggests that the auditory system may be slower to develop and mature in children with ASDs, according to Roberts. An 11-msec delay is brief, he added, but it means, for instance, that a child with ASD, on hearing the word "elephant," is still processing the "el" sound while other children have moved on. The delays may cascade as a conversation progresses, and the child may lag behind other children.
Copyright © 2010 AuntMinnie.com