Research

Cytoskeletal Damage in Neurodegenerative Disease

Alzheimer’s disease damages the cytoskeleton of neuronal axons: in the late stages, the disease induces degradation of the tau protein, an intrinsically disordered protein that binds microtubules together and stabilizes them against the so-called “dynamic instability.” Our research focuses on building mechanical models of microtubule bundles as taus are removed, in order to identify the key mechanisms of degradation and to look for new phenomena.

Left: end view of microtubules (blue) linked by taus (red). Right: as taus are removed, the microtubules can pull closer.
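
To give a flavor of this kind of bundle modeling, here is a purely illustrative toy sketch (not the group’s actual model; all parameters are hypothetical): microtubules are treated as points in a 2-D bundle cross-section, neighboring pairs are joined by tau “springs” with a preferred spacing, and a weak pairwise attraction lets the bundle contract as crosslinks are deleted.

```python
import numpy as np

rng = np.random.default_rng(0)

# cross-section of 7 microtubules: one central tube ringed by six (arbitrary units)
pos0 = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.87], [-0.5, 0.87],
                 [-1.0, 0.0], [-0.5, -0.87], [0.5, -0.87]])
links = [(0, j) for j in range(1, 7)] + [(j, j % 6 + 1) for j in range(1, 7)]

def relax(pos, taus, k_tau=10.0, r0=1.0, attract=0.2, core=0.4, k_rep=50.0,
          steps=1000, dt=1e-3):
    """Overdamped relaxation under tau crosslink springs, a weak pairwise
    attraction, and a steric (excluded-volume) repulsion between microtubules."""
    pos = pos.copy()
    n = len(pos)
    for _ in range(steps):
        force = np.zeros_like(pos)
        # surviving tau crosslinks: harmonic springs with rest length r0
        for i, j in taus:
            d = pos[j] - pos[i]
            r = np.linalg.norm(d)
            f = k_tau * (r - r0) * d / r
            force[i] += f
            force[j] -= f
        # weak attraction plus hard-core repulsion between every pair of tubes
        for i in range(n):
            for j in range(i + 1, n):
                d = pos[j] - pos[i]
                r = np.linalg.norm(d)
                f = attract * d / r
                if r < core:
                    f -= k_rep * (core - r) * d / r
                force[i] += f
                force[j] -= f
        pos += dt * force
    return pos

def mean_spacing(pos, pairs):
    return np.mean([np.linalg.norm(pos[j] - pos[i]) for i, j in pairs])

# progressively delete taus at random and watch the bundle contract
taus = list(links)
while taus:
    spacing = mean_spacing(relax(pos0, taus), links)
    print(f"{len(taus):2d} taus remaining -> mean neighbour spacing {spacing:.2f}")
    taus.pop(rng.integers(len(taus)))
```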

We are also studying the conformations and mechanical properties of the tau protein itself, with the aim of developing microscopic models of these “springs.” This involves large-scale molecular dynamics simulations on the group’s GPU computing cluster, Strider. This work is supported by the US NSF.

Tau protein dimer simulated for 20 ns by Natalie Hall
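
As a minimal sketch of how a GPU-accelerated run like this might be set up with OpenMM’s Python layer (this illustrates the general workflow, not the group’s exact protocol; the input and output filenames are hypothetical, and a solvated, pre-equilibrated structure is assumed):

```python
# Hedged example: 20 ns of Langevin dynamics on the CUDA platform with OpenMM,
# using the Amber ff99SB force field files shipped with OpenMM.
from simtk.openmm import app
from simtk import unit
import simtk.openmm as mm

pdb = app.PDBFile('tau_dimer.pdb')                       # hypothetical solvated structure
forcefield = app.ForceField('amber99sb.xml', 'tip3p.xml')

system = forcefield.createSystem(pdb.topology,
                                 nonbondedMethod=app.PME,
                                 nonbondedCutoff=1.0 * unit.nanometer,
                                 constraints=app.HBonds)

integrator = mm.LangevinIntegrator(300 * unit.kelvin,        # temperature
                                   1.0 / unit.picosecond,    # friction coefficient
                                   2.0 * unit.femtoseconds)  # time step

platform = mm.Platform.getPlatformByName('CUDA')             # run on the GPU
simulation = app.Simulation(pdb.topology, system, integrator, platform)
simulation.context.setPositions(pdb.positions)
simulation.minimizeEnergy()

simulation.reporters.append(app.DCDReporter('tau_traj.dcd', 5000))
simulation.reporters.append(app.StateDataReporter('tau_log.csv', 5000, step=True,
                                                  potentialEnergy=True,
                                                  temperature=True))

# 20 ns at a 2 fs time step = 10,000,000 steps
simulation.step(10000000)
```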

Publications from this research (group authors in bold)

“Application of the G-JF discrete-time thermostat for fast and accurate molecular simulations,” N. Gronbech-Jensen, N.R. Hayre, and O. Farago, Computer Physics Comm. 185:524-527 (2014).

“Model for competing pathways in protein aggregation: role of membrane bound proteins,” Y. Dar, B. Bairrington, R.R.P. Singh, D.L. Cox, submitted to Phys. Rev. E.

“Simulated Cytoskeletal Collapse via tau Degradation in Late Stage Alzheimer’s Disease,” A. Sendek, H.R. Fuller, N.R. Hayre, R.R.P. Singh, and D.L. Cox, submitted to FASEB Journal.

Amyloid Proteins for Materials Applications

With collaborators in physics (Rajiv Singh, Gergely Zimanyi), chemistry (Xi Chen, Gang-yu Liu, and Michael Toney), electrical engineering (Josh Hihath), and cell biology (Ted Powers), and with the help of the Office of the Vice Chancellor for Research, we are carrying out a combined theoretical/experimental study that attempts to use amyloid proteins as the basis of novel materials for energy applications. This work blends protein engineering, computer simulation, recombinant protein production, nanoscale science, and cell biology. The project is in its early stages; watch here for new developments!

Below: synthetic amyloid fibril simulated for 20 ns on the Strider GPU cluster

Strider GPU Cluster

Led by former postdoc Robert Hayre, and through the sponsorship of ICAM and the NSF, we have built a GPU cluster specialized for fast molecular dynamics simulations. We have installed recent versions of the CUDA and OpenCL platforms for GPU programming, along with GPU-accelerated builds of the AMBER and OpenMM molecular dynamics packages. The cluster’s hardware and software are described below.

Left: Robert Hayre and Jesse Singh next to our first Strider Rack. Right: A newer double precision cluster built by Robert Hayre.

# Overview

The Cox/Singh GPU cluster (dubbed *Strider*) is a public cluster computing facility,
maintained on the campus of the University of California, Davis by the
Biophysics group within the Department of Physics.  This Beowulf cluster is
composed of a front-end computer and many discrete commodity-hardware compute
nodes, all connected in a local-area network (LAN) by a gigabit Ethernet
switch.  The sections below describe the basic hardware and software
configuration.

## Hardware

The initial procurement and deployment of the cluster hardware took place in
2010, with the addition of four compute nodes in 2013.

### Head Node (Front-end)

The primary tasks of the head node are to act as a shared persistent storage
hub for compute tasks and user data; and to schedule, assign, and distribute
compute tasks to the compute nodes. It has the following basic configuration:

*   Intel Core i7-920 CPU
*   2×1.5 TB, 2×1.0 TB HDD
*   3×2 GB SDRAM

### 2010 Compute Nodes (6)

*   13 Nvidia GeForce GTX 285 GPUs (across all nodes)
*   4 Nvidia GeForce GTX 570 GPUs (across all nodes)
*   Intel Core i7-920 CPU (per node)
*   1.0 TB HDD (per node)
*   3×2 GB SDRAM (per node)

### 2013 Compute Nodes (4)

*   4 Nvidia GeForce GTX 680 GPUs (per node)
*   Intel Core i7-3820 CPU
*   128 GB SSD
*   2×4 GB SDRAM

## Software

The ICAM GPU cluster currently runs *Rocks 6.1*, an open-source Linux cluster
distribution that facilitates the deployment and management of computational
clusters. In addition to the basic utilities and applications available with
Rocks, users have access to several general and GPU-specific development packages
for scientific computing, including:

*   OpenMPI message-passing framework
*   CUDA Toolkit and GPU Computing SDK (NVIDIA Corp.)
*   PyCUDA / PyOpenCL (Andreas Klöckner)
*   OpenMM toolkit (SimTK.org)

Several biomolecular simulation packages are available, including
*OpenMM-accelerated GROMACS* and *CUDA-accelerated AMBER 12*.  Additional
software suites may be evaluated and installed on the server by request.
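
As a quick sanity check that the GPU development stack on a compute node is working, a short PyCUDA session along the following lines can be run interactively (the array-scaling kernel here is a hypothetical example, not part of the installed software):

```python
# Hedged example: compile and launch a trivial CUDA kernel through PyCUDA,
# doubling each element of an array on the GPU and verifying the result.
import numpy as np
import pycuda.autoinit                      # creates a CUDA context on the default device
import pycuda.gpuarray as gpuarray
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void scale(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;            // double each element in place
}
""")
scale = mod.get_function("scale")

x = np.arange(1024, dtype=np.float32)
x_gpu = gpuarray.to_gpu(x)              # copy the array to device memory

# 4 blocks of 256 threads cover all 1024 elements
scale(x_gpu.gpudata, np.int32(x.size), block=(256, 1, 1), grid=(4, 1))

assert np.allclose(x_gpu.get(), 2 * x)
print("GPU kernel ran correctly on", pycuda.autoinit.device.name())
```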
