24 March 2017
Cappella Guinigi
In many current neuroimaging investigations, more data is collected per day than was collected per experiment only a decade ago. Such dramatic increases in data volume have put considerable pressure on investigators to process, model, and interpret structural and functional results more efficiently through the development of sophisticated computing infrastructure and modern software toolsets. Human neuroimaging is thus now recognized as a 'big data' challenge. In my presentation, I will 1) provide an overview of the state-of-the-art computing systems we have deployed at the Mark and Mary Stevens Neuroimaging and Informatics Institute at the University of Southern California (USC), a system that is among the largest neuroscience-dedicated computing platforms in the world, supports the largest repository of its kind for human neuroimaging data, and greatly accelerates the 'discovery science' of the brain; and 2) feature work using this infrastructure that seeks to automatically extract the visual pathways from diffusion-weighted imaging (DWI) datasets. With large-scale databasing, sophisticated compute technologies, workflow software, and modern software algorithms now available, we believe the pieces are in place to develop new, population-specific neuroimaging atlases for patient groups such as those with visual impairment. I will propose that, together with large-scale neuroimaging datasets, such atlases can form a basis for new studies seeking a greater understanding of visual system defects and blindness.
Speaker:
Van Horn, John
Units:
MOMILAB