Lab home
Mike Gashler
Dr. Michael S. Gashler leads our research efforts.

Current research assistants:

Stephen Ashmore
Stephen C. Ashmore has worked on MLP-Wagging (a method for parallelizing neural network training across clusters of machines), weight bleeding (a method for mitigating the vanishing gradient problem with neural networks), and continuous convolutional network layers.
Luke Godfrey
Luke Godfrey works on applying artificial neural networks to time series prediction. He is currently researching the use of the Fourier transform as a method for initializing network weights so that a model of a time series extrapolates well.
Alex Cardiel
Alex Cardiel is working on methods for dynamically tuning neural network topologies and meta-parameters, so they can be trained efficiently with minimal domain expertise.
Brad Holliday
Brad Holliday is working on a method for enabling deep reinforcement learning to benefit from unsupervised training, allowing rapid learning in environments with limited oracle availability.
Josh Burbridge
Josh Burbridge is a PhD student researching applications of machine learning in computational biology. He is currently studying how convolutional neural networks can be used in de novo genome annotation.

Former colleagues:

Rachel Findley
Rachel Findley did an honors thesis on ant colony optimization. She investigated a novel approach that uses more informative simulated pheromones to improve the rate of optimization.
Sarah Stolze
Sarah Stolze did an honors thesis on learning to extract information from EEG devices. She demonstrated that improved accuracy in unspoken speech recognition could be achieved by augmenting the EEG signal with video data.
Seok Lee
Seok Lee did an honors thesis on visualizing intrinsic representations of state in dynamical systems. He presented a novel technique for efficiently training autoencoders with images of arbitrary resolution.
Pedro Garcia
Pedro Garcia worked to help enable deep neural networks to be used for collaborative filtering. These methods facilitate accurate recommender systems with desirable properties, such as robustness to the cold-start problem.
Austin Bittinger
Austin Bittinger worked with time-series data. He developed a novel method for training a deep neural network to initialize the weights of a recurrent neural network, avoiding many problems with local optima.
Grant Slatton
Grant Slatton wrote an honors thesis comparing regularization methods for deep neural networks. He found that dropout generally outperforms weight decay, but not under all conditions.
Paul Walton
Paul Walton worked to identify which learning algorithms and preprocessing methods are most effective for classifying EEG data. He also did some work on closed-form methods for pre-initializing the weights in deep neural networks.
Zac Kindle
Zachariah Kindle worked on methods for improving unsupervised learning. He also did some work with a cognitive architecture called MANIC.