
Neocortex Active Projects

PI Institution Project
Gert Cauwenberghs University of California, San Diego Neocortex System for simulating energy-efficient large-scale spiking neural networks
Jianyi Zhang Duke University Revolutionizing Hardware Design: Harnessing Large Language Models for Automated Verilog RTL Task Completion
Tong Shen Carnegie Mellon University Toward Robust Object Tracking with Language and Vision Cues
Gongbo Liang Texas A&M University-San Antonio Mutation-Based Adversarial Attacks on Neural Text Detectors
Tengyu Ma Stanford University Improving the Reasoning Capabilities of Large Language Models
Lei Li University of California, Santa Barbara Investigating Large Language Models for Protein Sequence Design
Shwetank Singh Case Western Reserve University Open Source Smart Watch Seizure Detector
Yingzhen Yang Arizona State University Model Compression for BERT and Its Applications
Berkley Gryder Case Western Reserve University Testing the Limits of Deep Learning for the Discovery of Covalent Disrupters of Protein-Protein Interactions
Johann Rudi Virginia Tech Cerebras Accelerated Deep Neural Networks for Parameter Estimation in Scientific Models
Chung Shih National Energy Technology Laboratory Wafer Scale Engine, Field Equation, Application Programming Interface (WFA) for Material Development
Saining Xie New York University One Model to Generate Them All: Scaling Multi-modal Diffusion Transformers on CS-2
Dhabaleswar Panda The Ohio State University Exploring large-sample DL training on the CS-2
Artur Dubrawski Carnegie Mellon University Automatic Text Summarization of BioSignals
Amar Ramapuram Iowa State University Physics informed distributed dynamic simulations of large-scale power grids for Optimization and Stability Analysis
Gabriel Gomes Carnegie Mellon University Multimodal learning of chemistry-aware molecular representations
Jee Choi University of Oregon High-Performance Tensor Decomposition on Massively Parallel Data Flow Architecture
John Irwin University of California, San Francisco Efficient Optimization of Docking Configurations using Sparse Convolutional Neural Networks towards Automating Ultra-Large-Scale Docking Virtual Screens
Sebastian Scherer Carnegie Mellon University Generic Visual Instance Search
Bikash Kanungo University of Michigan A Data Driven Approach to Improved Exchange-Correlation Functionals in DFT
Yunhe Feng University of North Texas Accessing Social Impacts of Emerging Deep Generative Models through Public Big Data
Franz Franchetti Carnegie Mellon University SPIRAL Code Generation for Neocortex
Zhiyong Zhang Stanford University Learning the underlying molecular distribution of chemical spaces with large models for the generation and discovery of novel molecules
Aditya Balu Iowa State University Neural PDE Solvers on regular and irregular domains
Tao Yang University of California, Santa Barbara Fast Document Ranking with Transformer-based Neural Models
Bin Hu Los Alamos National Laboratory Explain SARS-CoV-2 Spike Protein Evolution using AI
Dirk Van Essendelft National Energy Technology Laboratory Developing field equation application programming interface for fluid dynamics applications
Chung Shih National Energy Technology Laboratory Predicting subsurface CO₂ behaviors with deep neural network and fluid dynamics simulation
Hualou Liang Drexel University Parameter Efficient Fine-tuning for Large Language Models
Chinmay Hegde New York University Towards Deep Vision-Language Models for Ecological Monitoring
Biprateep Dey University of Pittsburgh Making the Largest Map of Our Universe
Amar Ramapuram Iowa State University Monitoring and Mitigating Electric Grid Instability due to Renewables Using Neural Networks
Thomas Hales University of Pittsburgh Formal Abstracts in Mathematics
Vivek Srikumar University of Utah Tensor networks and massively parallel language models on accelerator arrays
Dingwen Tao Washington State University Accelerating Large-Scale Graph Neural Networks Training on Cerebras Wafer Scale Engine
Venkatasubramanian Viswanathan University of Michigan Large scale Machine Learning force fields for metal hydride systems
Mark Bower Yale University Apply Machine Learning to Predict Antibody Drug Developability

Neocortex Former Projects

PI Institution Project
Giulia Fanti Carnegie Mellon University Privacy-preserving synthetic data from federated clients
Vincenzo Carnevale Temple University Discovery & Engineering Protein Artificial Intelligence (DEPr-AI) with Manifold-aware Protein Resuscitation (MaPR): Synthetic Bioinformatics in the Age of AlphaFold2
Wu Feng Virginia Tech ComputeCOVID19++: Accelerating Medical Diagnosis and Monitoring via High-Performance Deep Learning on CT Images
Pin-Kuang Lai Stevens Institute of Technology Apply Machine Learning to Predict Antibody Drug Developability
Kenneth Chiu Binghamton University Wafer-Scale Geometric Deep Learning on the PSC Neocortex
Bhiksha Raj Carnegie Mellon University Unsupervised labelling and learning from large audio datasets
Sreeskandarajan Sutharzan Cincinnati Children's Hospital Medical Center A novel deep learning method for discovering genetic mechanisms underlying differential gene regulation
Tushar Krishna Georgia Institute of Technology Enabling Training and Inference of Large and Sparse Deep Learning Models
Gregory Beroza Stanford University Earthquake Phase Association with Graph Neural Networks
Peiwen Cong Georgia Institute of Technology Deep learning analysis for single-molecule ligand-receptor interaction
Timothy Chung University of Pittsburgh Artificial Intelligence Framework to Predict Wall Stresses on Aneurysms
Rafael Gomez-Bombarelli Massachusetts Institute of Technology Improving predictability of anomalies in vitreous silica using uncertainty-based adversarial attack
Konasale Prasad University of Pittsburgh Discerning the complex pattern of brain networks related to psychotic disorders
Siddhartha Ghosh National Center for Atmospheric Research Exploring Wafer Scale Engine on fluid dynamics simulations for atmospheric and other applications.
Han Hu University of Arkansas Robust Fault Detection of Cooling Systems using Multimodal Fusion
Gil Speyer Arizona State University Analysis of differential dependency on large-scale RNA expression networks
Ryan Mills University of Michigan Molecular mutagenesis by biological graphs
Boniface Fokwa University of California, Riverside High-throughput and data-mining search for new rare-earth-free permanent magnetic borides
Ruoying He North Carolina State University Ocean Reanalysis Data-Driven Deep Learning Forecast
Xulong Tang University of Pittsburgh Characterizing DNN training on Neocortex
Gail Rosen Drexel University Interpretable Deep Modeling of SARS-CoV-2 Sequences
Huajie Shao The College of William & Mary Exploring Interpretable Deep Learning from Information Theoretic Perspective: Modeling and Applications.
Graham Neubig Carnegie Mellon University Large-scale Pre-training for Natural Language to Code Generation
Mark Anastasio University of Illinois Urbana-Champaign Automated sleep states classification for wide-field calcium imaging using deep learning
Yaping Liu Cincinnati Children's Hospital Medical Center Impute cell free DNA fragmentation pattern from low-coverage whole-genome sequencing
Lyle Muller Salk Institute for Biological Studies Large-scale spiking network models to explain dynamics of visual perception and working memory
Zhiyong Zhang Stanford University An Integrated Machine Learning Platform of GWAS (Genome Wide Association Study) and Epigenetics for Personalized Bladder Cancer Clinical Applications
John Galeotti Carnegie Mellon University AI Understanding of Ultrasound Scans: Semantic Segmentation and Diagnosis Trained with Simulation and Genetic/Back-Prop Hybrid Training
Arthur Lobo L2RACE CAR RNN control
John Wohlbier Carnegie Mellon University Identifying Actor Characteristics in State-Linked Information Operations Using Twitter Data and Graph Based Neural Networks
William Bradley Mirabolic Consulting Voxel Pretraining for Few-Shot Learning
George Karniadakis Brown University Training of conservative physics-informed neural networks (CPINN) to solve the incompressible Navier-Stokes equation at high Reynolds number
Jason Larkin Carnegie Mellon University Simulation and Benchmarking of Quantum Machine Learning

Active Project Details


Project Title: Neocortex System for simulating energy-efficient large-scale spiking neural networks

Gert Cauwenberghs, University of California, San Diego

Project Abstract: This proposal seeks access to the Cerebras CS-2 and the HPE Superdome servers of the Pittsburgh Supercomputing Center’s Neocortex system. The hardware will be used for cutting-edge AI research, specifically for simulating energy-efficient large-scale spiking neural networks. There has been a paradigm shift in recent years from conventional artificial neural networks (ANNs) to spiking neural networks (SNNs) due to their ability to simulate more complex functionalities and to work more efficiently with spatiotemporal data. However, simulating large-scale SNNs with millions of neurons and billions of synapses is a computationally intensive task that requires specialized hardware. The Cerebras CS-2 system can revolutionize the simulation of large-scale SNNs for research applications in fields including healthcare, robotics, and neuroscience.


Project Title: Revolutionizing Hardware Design: Harnessing Large Language Models for Automated Verilog RTL Task Completion

Jianyi Zhang, Duke University

Project Abstract: The hardware design industry has traditionally required manual processes for the creation and verification of Verilog RTL (register-transfer level) designs. However, with the emergence of large language models, there is growing interest in utilizing these models to automate various tasks in the hardware design process. Our project aims to establish a foundation model for efficient and trustworthy hardware design through the automation of Verilog RTL tasks using large language models. We plan to pre-train a T5-like large language model on a self-curated dataset consisting of approximately 130 million Verilog code samples. We also aim to modify the T5 model architecture to enhance its performance on downstream tasks such as code summarization and refinement. To further optimize the training process, we plan to design an efficient loss function. Our framework will utilize complementary libraries including Transformers, OpenAI, Pickle, and Datasets among others, in addition to the standard PyTorch and TensorFlow distributions. Our approach has the potential to transform the hardware design industry by utilizing advanced language models and cutting-edge computational resources to create more efficient and reliable systems. By establishing a foundation model for efficient and trustworthy hardware design, our project aims to revolutionize the Verilog RTL design process.
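
As a rough sketch of this setup (not the project's actual configuration), a T5-style encoder-decoder can be instantiated from scratch with the Transformers library; the model sizes below are placeholders, and the random token ids stand in for a tokenized Verilog corpus.

```python
import torch
from transformers import T5Config, T5ForConditionalGeneration

# Hypothetical sizes only; the project's actual configuration is not shown here.
config = T5Config(
    vocab_size=32000, d_model=512, num_layers=6, num_decoder_layers=6,
    num_heads=8, pad_token_id=0, decoder_start_token_id=0)
model = T5ForConditionalGeneration(config)

# Random ids stand in for tokenized Verilog inputs and summarization targets.
input_ids = torch.randint(1, config.vocab_size, (2, 128))
labels = torch.randint(1, config.vocab_size, (2, 32))

loss = model(input_ids=input_ids, labels=labels).loss  # seq2seq training loss
loss.backward()
print(f"pretraining step loss: {loss.item():.3f}")
```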


Project Title: Toward Robust Object Tracking with Language and Vision Cues

Tong Shen, Carnegie Mellon University

Project Abstract: The emergence of large multi-modal foundation models has altered the paradigm of machine learning and computer vision. Our project aims to create cutting-edge object tracking algorithms, built upon the pioneering foundation models. By integrating multi-modal cues, we will significantly elevate tracking precision and robustness, driving transformative advancements in the field. In this project, we will start from the vision-language foundation models and fine-tune them specifically for object tracking applications. We will also devise novel multi-model feature fusion modules, employing Transformers to better harness information from the language domain.


Project Title: Mutation-Based Adversarial Attacks on Neural Text Detectors

Gongbo Liang, Texas A&M University-San Antonio

Project Abstract: Neural text detectors aim to identify the characteristics that distinguish neural (machine-generated) text from human-written text. To challenge such detectors, adversarial attacks can alter the statistical characteristics of the generated text, making the detection task increasingly difficult. Inspired by advances in mutation analysis in software development and testing, in this project we propose character- and word-based mutation operators for generating adversarial samples to attack state-of-the-art neural text detectors. This falls under white-box adversarial attacks, in which attackers have access to the original text and create mutated instances based on it. The ultimate goal is to confuse machine learning models and classifiers and decrease their prediction accuracy. We introduce a general framework for building the character- and word-level mutation operators. Several operators are demonstrated and evaluated using the text captions of the MS COCO 2017 dataset and state-of-the-art neural language models. We believe the proposed mutation-based adversarial attacks can serve as a systematic way to evaluate the robustness of any language analysis model.
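
As a rough illustration of the character-level operators described above, the sketch below applies a simple adjacent-character swap to caption text. The operator, mutation rate, and example caption are hypothetical placeholders, not the project's actual operators or data.

```python
import random

def mutate_swap_adjacent(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Swap two adjacent characters inside randomly chosen words of a caption."""
    rng = random.Random(seed)
    words = text.split()
    mutated = []
    for word in words:
        if len(word) > 3 and rng.random() < rate:
            i = rng.randrange(len(word) - 1)
            word = word[:i] + word[i + 1] + word[i] + word[i + 2:]
        mutated.append(word)
    return " ".join(mutated)

caption = "A group of people riding bikes down a city street"
print(mutate_swap_adjacent(caption, rate=0.5))
```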


Project Title: Improving the Reasoning Capabilities of Large Language Models

Tengyu Ma, Stanford University

Project Abstract: Reasoning plays a central role in human cognition and in the human ability to solve problems, make decisions, and think critically. In the field of deep learning, large language models (LLMs) have recently obtained human-level performance on a variety of tasks including translation, question answering, and summarization. Despite these successes, recent work on models such as GPT-4 has highlighted that LLMs still perform poorly at reasoning and mathematical problem solving. Current approaches adapt LLMs to math datasets using continual pretraining, gradient-based finetuning, or prompt engineering to elicit step-by-step reasoning. Though these methods have significantly improved LLM performance on tasks such as grade school and high school math, LLMs still make simple arithmetic errors and struggle to write complex proofs. We aim to improve the performance of LLMs on mathematical problem solving by (1) benchmarking LLMs on challenging reasoning tasks to better understand their failures, and (2) introducing and evaluating novel finetuning-based methods for adapting LLMs to reasoning tasks. We anticipate that a key to our methods will be teaching LLMs how to plan, brainstorm, and backtrack in the process of writing mathematical arguments.


Project Title: Investigating Large Language Models for Protein Sequence Design

Lei Li, University of California, Santa Barbara

Project Abstract: Protein engineering has become a crucial research area in chemistry and biology, with the primary objective being to design proteins with desired properties. One of the significant challenges in protein engineering is designing novel protein sequences with improved properties, such as increased structural stability or enzyme activity. Recently, large language models (LLMs) have become increasingly popular and achieve state-of-the-art performance on many natural language processing tasks. Given their strong ability to model discrete text sequences, a natural question is whether LLMs can be directly adapted to protein sequence design. In this project, we aim to study this problem by guiding the GPT-3 language model to design novel protein sequences with improved properties.


Project Title: Open Source Smart Watch Seizure Detector

Shwetank Singh, Case Western Reserve University

Project Abstract: Experiencing a seizure while alone increases morbidity and mortality. Studies based on national population registries have shown a 3.5 times higher odds of experiencing a sudden, unexpected death for people with epilepsy who sleep alone compared to those who sleep with a partner. This led to the development of seizure alert devices. Currently, over 20 seizure alert devices are commercially available. All are proprietary and none have undergone a clinical trial comparing performance in real time to human epilepsy doctors. The need for bespoke hardware with proprietary seizure alert devices advertises the medical condition of the user, which impacts technology adoption. Our work aims to create a clinically validated, open-source seizure alert system that can function using the accelerometers found in smartwatches. This will allow discreet deployment of models validated in a blinded clinical trial for detecting seizures.


Project Title: Model Compression for BERT and Its Applications

Yingzhen Yang, Arizona State University

Project Abstract: Vision Transformer (ViT) demonstrates that BERT-like Transformers for natural language processing can be applied to computer vision tasks and achieve state-of-the-art performance. Despite achieving tremendous success, vision transformers demand far more resources than CNNs, making them difficult to deploy on edge devices such as mobile phones and embedded devices. In this proposal, we propose two levels of compression for BERT-based models. At the lower level, we propose to automatically search for the number of heads in each transformer block of a BERT-based model. At the higher level, we propose to search for the architecture of a BERT-based model by deciding the insertion locations of transformer blocks. We propose to compress popular unsupervised pre-trained BERT models for computer vision tasks using the proposed compression methods. We will also combine visual BERT transformers with language BERT transformers for cross-modality learning tasks such as visual question answering and visual reasoning. The proposed compression method can be used to search for a tradeoff between model efficiency and the best ways of connecting and exchanging information between the visual and language branches.


Project Title: Testing the Limits of Deep Learning for the Discovery of Covalent Disrupters of Protein-Protein Interactions

Berkley Gryder, Case Western Reserve University

Project Abstract: P300 is a histone acetyltransferase that acts on a wide range of proteins in the cell. Unlike its homolog CREBBP (CBP), aberrant p300 activity is often implicated as critical in driving dysregulation and disease. In the case of Alveolar Rhabdomyosarcoma (aRMS), recruitment of p300 by the fusion protein PAX3-FOXO1 (P3F) is one of the key causes of the transcriptional activation of core regulatory transcription factors driving disease progression. Prior work has shown the limited utility of catalytic inhibitors in slowing disease progression, as well as the strong impact of p300 degradation in some diseases. However, very little previous work has explored the prospect of targeting the specific protein-protein interaction implicated in driving aRMS. The proposed work will use deep learning-enhanced docking techniques to probe the interaction space between p300 and P3F, combined with recent advances in machine learning-based small molecule discovery and refinement, to provide a library of promising molecules to disrupt this key interaction. The goals of this work are not only to identify a set of drugs that could be synthesized as first-in-class p300-P3F covalent disrupters, but also to build a cutting-edge pipeline for model-based drug discovery based on structural proteomics and novel chemoinformatics.


Project Title: Cerebras Accelerated Deep Neural Networks for Parameter Estimation in Scientific Models

Johann Rudi, Virginia Tech

Project Abstract: Dynamical and stochastic processes are ubiquitous in scientific applications and these are often governed by parametrized equations that are deterministic differential systems and/or stochastic processes. We recently developed parameter estimation techniques based on deep neural networks to estimate parameters of such systems. Advantages of our approach are fast parameter predictions, which, as a consequence, accelerate applications with real-time and frequent estimation demands. However, the training phase of deep neural networks poses computational bottlenecks preventing a wider applicability of deep learning-based parameter estimation. Our aim is to overcome these limitations with recent advances in hardware and algorithms tailored to neural networks.


Project Title: Wafer Scale Engine, Field Equation, Application Programming Interface (WFA) for Material Development

Chung Shih, National Energy Technology Laboratory

Project Abstract: We propose to expand the Wafer Scale Engine, Field Equation, Application Programming Interface (WFA) by developing new kernels to solve MD and MC problems. Specifically, we propose to implement the spatial-decomposition method (ref 2) for MD simulations on the WSE, and algorithms developed for Graphics Processing Units (GPUs) (ref 3) for MC simulations. Many kernels will be implemented, such as force and energy evaluation, periodic boundary conditions, nearest-neighbor linked lists and link cells, Ewald summation and Wolf's method for electrostatic interactions, exclusion lists for specific atom pairs, and various integrators for the second-order differential equations of motion. Note that although the same Newtonian equations are solved in both CFD and MD simulations, they are solved using completely different methods. In CFD, an Euler-type method is typically used to solve for physical properties (such as temperature, pressure, and fluid velocity) in a fixed space. In contrast, MD simulations use the Lagrangian method to track the trajectory of each particle and compute ensemble averages over particles. For MC simulations, three kinds of moves, namely thermal moves, volume changes, and insertion and deletion of molecules, will be implemented on the WSE. Implementing these MD/MC kernels on the WSE is expected to benefit a wide range of industrial and academic applications, such as accelerating drug design and facilitating materials development for carbon capture, battery electrolytes, gas sensors, and beyond. Finally, we expect that researchers can use these low-level MD/MC kernels to develop more advanced simulation methods on the WSE, calling the kernels from Python so that they do not need to handle complex operations on the WSE directly. To start the MC implementation, we will develop new WFA kernels to solve the 2-D Ising model, which shares many significant features with more complex atomistic MC simulations, such as the acceptance rule and periodic boundary conditions.
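
To make the planned 2-D Ising starting point concrete, here is a minimal CPU reference in NumPy of a Metropolis sweep with periodic boundary conditions and the standard acceptance rule; it illustrates the physics only and is not a sketch of the WSE kernel decomposition.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2-D Ising model with periodic boundaries."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(0, n, size=2)
        # Sum of the four nearest neighbors under periodic boundary conditions.
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j] +
              spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2.0 * spins[i, j] * nb  # energy change if spin (i, j) flips (J = 1)
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(32, 32))
for _ in range(100):
    metropolis_sweep(spins, beta=0.44, rng=rng)
print("magnetization per spin:", spins.mean())
```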


Project Title: One Model to Generate Them All: Scaling Multi-modal Diffusion Transformers on CS-2

Saining Xie, New York University

Project Abstract: The research project "One Model to Generate Them All: Scaling Multi-modal Diffusion Transformers on CS-2" aims to develop a new generation of infrastructure for Diffusion Transformers, which achieve outstanding performance on image conditional generation tasks and showcase the remarkable scalability of transformers within the diffusion framework. A major benefit of using a transformer backbone is its adaptability to multi-modal learning, allowing for the standardization and unification of architectural backbones across various domains. The project intends to train multi-modal transformer-based foundation models using CS-2, which could revolutionize the research landscape in this area. The proposed multi-modal diffusion transformers will utilize existing transformer blocks in PyTorch and will be extended to support various modalities such as images, audio, and video by designing unique conditioning modules.


Project Title: Exploring large-sample DL training on the CS-2

Dhabaleswar Panda, The Ohio State University

Project Abstract: Efficient DL training on large sample sizes (e.g. long-sequence language modeling and large-image vision modeling) would provide wide-reaching improvements to cutting-edge applications in vision (medical, geospatial, and astronomical imaging) and language (document summarization, extractive Q&A, DNA sequence analysis). However, storing large samples in accelerator memory is a challenging paradigm for GPU-based HPC systems to tackle due to the limited HBM capacity on GPUs. The configurable MemoryX solution provided for Cerebras systems, however, decouples model memory from the accelerator. Therefore, we propose training large transformer-based and CNN-based models on large-sample datasets on the CS-2 architecture. We believe such a pairing will demonstrate the unique strengths of Cerebras hardware on a wide-reaching application domain.


Project Title: Automatic Text Summarization of BioSignals

Artur Dubrawski, Carnegie Mellon University

Project Abstract: Automated translation of data to textual narratives is commonly referred to in the literature as data-to-text generation. Here, data refers to an entity that is not exclusively linguistic (e.g., graphs, tables, knowledge bases). While most existing data-to-text approaches deal with tabular data and graphs, few approaches have focused on textual summaries of time-series data (time-series captioning, or time-series summarization). This has largely been due to the lack of paired time-series and text data. Biosignal time-series summarization has been studied for EEG-to-text and ECG-to-text. Prior work advocates for a human-in-the-loop system to generate the text summaries, stating that automated reports are too erroneous. While we note the merit in human-guided text generation, our proposed method requires far less effort from the associated medical professional. Another overarching limitation observed in all data-to-text generation methods is their evaluation metrics. Almost all methods report standard text generation metrics like BLEU and BERTScore, which are simply inadequate for evaluating correctness in domains like healthcare. While many methods perform expert (human) evaluation, we note that there are no standard guidelines on how to evaluate medical text reports. Our proposed method addresses all of these limitations.


Project Title: Physics informed distributed dynamic simulations of large-scale power grids for Optimization and Stability Analysis

Amar Ramapuram, Iowa State University

Project Abstract: The electric power system is the nation’s critical infrastructure. It consists of millions of individual devices that are sparsely interconnected through transmission and distribution lines. Because current and power flow only along these lines, the properties of a device, such as voltage and current drawn, are directly influenced only by the behavior of the immediate neighboring devices and the properties of the interconnecting lines. Non-convex optimization problems in power grid operations can be reformulated as a primal-dual dynamical system that has distributed dynamics and whose equilibrium is the optimal solution. Similarly, stability analysis in power grids is performed by simulating the non-linear dynamical equations for various disturbances and observing the evolution of voltages and currents over long time scales. We can also calculate stability metrics from the voltage evolution during the simulations to understand how close the system is to collapse. Conventional approaches for simulating power grid dynamics have taken advantage of the sparse nature of the power grid using techniques such as sparse solvers. However, these approaches have not exploited the distributed nature of the power grid dynamics, for lack of a computing architecture that can leverage this property. Neocortex fills this void by having sufficient memory to hold the entire state of the system while also providing ultrafast communication between neighboring computing cores. We can recast the dynamic simulations into a form where the evolution of the state of a grid component (generator, motor, transmission line, etc.) depends on the states of the neighboring components. These dynamics can then be simulated in near real time. We envision a likely speedup of ~20x (based on the analysis of the NETL CFD solution on Neocortex) compared to existing approaches for systems with more than 10k elements.
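
As a toy illustration of the primal-dual dynamical-system idea (not the project's grid formulation), the sketch below integrates primal-dual gradient dynamics for a small equality-constrained quadratic program with forward Euler; the problem, step size, and iteration count are arbitrary placeholders.

```python
import numpy as np

# Toy problem:  minimize 0.5*||x||^2  subject to  x1 + x2 = 1.
# Lagrangian:   L(x, lam) = 0.5*||x||^2 + lam * (x1 + x2 - 1)
# Primal-dual dynamics:  dx/dt = -grad_x L,   dlam/dt = +grad_lam L.
x = np.zeros(2)
lam = 0.0
dt = 0.01
for _ in range(20_000):
    grad_x = x + lam * np.ones(2)   # gradient of L with respect to x
    residual = x.sum() - 1.0        # gradient of L with respect to lam
    x -= dt * grad_x
    lam += dt * residual
print(x)  # approaches the optimum [0.5, 0.5]
```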


Project Title: Multimodal learning of chemistry-aware molecular representations

Gabriel Gomes, Carnegie Mellon University

Project Abstract: Machine learning (ML) is already widely used to predict molecular properties (ranging from energies to biological activities), to design new molecules, and to analyze entire reactions. Improving the performance of ML models on these tasks is crucial for many applications, including but not limited to drug design, materials discovery, and automated synthesis planning. The most important factor in developing successful solutions is not the algorithm but rather the initial representation of the molecule. Many representations, from SMILES strings to molecular graphs, correctly capture the molecular structure but lack important chemical information, such as information about the electronic structure. This project aims to infuse electronic information into various types of molecular representations by constructing a joint feature space. In the first step, multiple autoregressive models will be pretrained to grasp general trends in molecular structure distributions from large chemical structure datasets. Then, structure encoders from these models will be finetuned to increase their mutual information and create a unified representation across all input data modalities. Finally, these representations will be tested on various downstream tasks. The results of this work will accelerate research in many critical areas by providing a way to infuse molecular electronic properties into multiple types of molecular representations. The requested resources will be used primarily to train Transformer-based language models, yielding good SMILES/SELFIES encoders for the subsequent joint feature space construction. We also plan to try training graph neural networks on the Neocortex infrastructure.


Project Title: High-Performance Tensor Decomposition on Massively Parallel Data Flow Architecture

Jee Choi, University of Oregon

Project Abstract: Many critical applications, such as data mining, social network analytics, cybersecurity, and healthcare, generate massive amounts of multidimensional data as sparse tensors that can be analyzed quickly and efficiently using tensor decomposition (TD). TD algorithms for high-dimensional sparse data are challenging to execute on emerging parallel architectures due to their low arithmetic intensity, irregular memory access, workload imbalance, and synchronization overhead. In TD algorithms, each non-zero element is associated with N indices in an N-dimensional space, which are used to retrieve rows from source matrices that are then used to calculate an update for a destination matrix row. This type of processing is potentially well-suited to data flow architectures, where each non-zero element is passed through a series of data accesses and floating-point operations, and the final result associated with the non-zero is accumulated in a serialized/atomic manner. This is particularly true for streaming data, which flows in continuously over time. However, due to the lack of commercially available data flow processors, only a few studies have been done on FPGAs, with limited performance improvements. The CS-2 system offers large compute performance coupled with extremely high memory and interconnect bandwidth, allowing our algorithms to achieve an unprecedented level of performance, while simultaneously posing interesting performance challenges associated with its unique data flow architecture. We propose to develop and implement novel parallel algorithms for various TD algorithms and use them to analyze both static and streaming data.


Project Title: Efficient Optimization of Docking Configurations using Sparse Convolutional Neural Networks towards Automating Ultra-Large-Scale Docking Virtual Screens

John Irwin, University of California, San Francisco

Project Abstract: Large-scale virtual screening campaigns are on the frontline of modern drug discovery. They allow quick in silico selection of the best candidate drug molecules based on the estimated strength of their interaction with a target protein, thus saving time and costs on experimental testing. Screening billions of compounds requires fast evaluation of protein-ligand binding affinity. For example, the DOCK program’s average speed is about 1 compound/sec/core. Such a high speed is achieved by pre-computing the interaction grids in the binding pocket of the protein, which are later used to estimate binding affinities. In order to initiate a large-scale docking campaign, a researcher needs to generate grids that (1) produce correct binding conformations for known ligands and (2) predict higher scores for high-affinity molecules (“actives”) compared to any other randomly chosen compound (“decoys”). Optimization of grids typically requires several weeks of skilled labor by a trained computational chemist, involving the informed variation of several parameters using heuristics, trial-and-error, and iteration. This process can be simplified and accelerated by building a sparse convolutional neural network capable of predicting the optimal parameters for the grid generation process based on the structure of a receptor-ligand complex.


Project Title: Generic Visual Instance Search

Sebastian Scherer, Carnegie Mellon University

Project Abstract: Humans can rapidly create a mental picture of a novel object by quickly figuring out its 3D geometry. This geometry helps to structure object search in complex scenes. This project aims to develop methods that can build such mental pictures even if the algorithm has never seen the object, or any object of the same type, before. This is a fundamental ability for humans; however, even the latest machine learning and computer vision models still cannot achieve this task. Here we propose to address the 3D zero-shot instance search task. By exploring the possibility of encoding 3D information about a target object, we will develop a set of new models and try to find a way to imitate the above human capabilities. We will approach the problem fundamentally, from the ground up, as a 3D problem and will leverage tools such as photo-realistic image generation, 3D reconstruction, salient object detection, and zero-shot learning. We expect this research to lead to fundamental advances in generic visual instance search. We anticipate that the amount of data generated by this research will be huge, and, by the definition of the underlying task, we would like to build a deep-learning model that can take a sequence of images and process them with plain convolutional neural networks and, possibly, vision-transformer-style image encoders and decoders. Considering the sheer amount of data we need to handle and the size of the model, we expect that the capabilities of Neocortex can greatly boost our model development and evaluation. Please see the attached PDF file for more information.


Project Title: A Data Driven Approach to Improved Exchange-Correlation Functionals in DFT

Bikash Kanungo, University of Michigan

Project Abstract: Wavefunction theory (WFT) methods and density functional theory (DFT) constitute the two most widely used ab-initio strategies for chemical and materials simulations. WFT methods, such as configuration interaction (CI), can be tuned to arbitrary accuracy but scale poorly with the number of electrons. DFT, on the other hand, is highly scalable and allows for a formally exact reduction of the many-electron problem to an effective single-electron problem, called the Kohn-Sham (KS) eigenvalue problem. However, this comes at the cost of a crucial approximation for the exchange-correlation (XC) potential (or equivalently the XC energy), which encapsulates the quantum many-electron interactions as a unique functional of the ground-state electronic density. Furthermore, traditional strategies for obtaining further accuracy within the KS formalism are ambiguous. The goal of this project is to alleviate the shortcomings of existing XC approximations by modeling the XC functional through machine learning, using data from WFT methods. This approach entails two distinct steps. First, we use accurate ground-state densities from WFT methods and perform an inverse DFT calculation to obtain the exact XC potential that yields the WFT density. Subsequently, we use the density and XC potential pairs from multiple atoms and molecules as training data to model the XC functional, i.e., the functional dependence of the XC potential (or energy) on the density.


Project Title: Accessing Social Impacts of Emerging Deep Generative Models through Public Big Data

Yunhe Feng, University of North Texas

Project Abstract: Deep generative models have become one of the most controversial artificial intelligence techniques in academia and industry in recent years due to their potential for abuse and their unexpected societal impacts. DeepFake, a family of deep generative models that can be used to create synthetic images and videos of people, has played a significant role in fake news, misinformation, blackmail, and pornography, causing considerable information chaos on the Internet. Because of these unpredicted risks and potential biases, state-of-the-art text-to-image generative models (e.g., OpenAI's DALL-E 2 and Google's Imagen and Parti) have been announced but not made accessible to the public. However, independent researchers have replicated these announced text-to-image generative models by adopting the published underlying training approaches. They have also open-sourced their pre-trained models and provided online trial services, making them available to everyone. Although the models replicated by independent researchers are not as powerful as the authentic ones due to smaller training datasets and limited GPU resources, people enjoy playing with them and sharing their findings online. For example, in June 2022, around 50,000 images were generated per day by the online DALL-E Mini (a replication of OpenAI's DALL-E) service, and DALL-E Mini went viral on social media. In this proposal, we use social media as a lens to investigate and estimate the potentially amplified biases, anti-society risks, and negative societal impacts and threats of the unreleased generative models by conducting a systematic analysis of the numerous public postings from multiple social media platforms, including Twitter and Reddit. More specifically, we will collect a sizeable multi-modal dataset containing the generated images and the text used for image generation from social networks. Then, we will load and fine-tune pre-trained image object detection models and large language models (LLMs) to detect offensive elements in these generated images and study the relationship between the input text and the output images. Thus, we will gain a deep understanding of how ordinary people use these powerful generative models and what potential negative impacts the models may bring. We think the WSE storage and computing resources will significantly facilitate our research, both for storing the collected data and for deep learning model training.


Project Title: SPIRAL Code Generation for Neocortex

Franz Franchetti, Carnegie Mellon University

Project Abstract: The SPIRAL system builds on 20 years of research by PIs Franchetti, Moura, Hoe, and Low, and 9 years of commercial R&D in the SPIRAL effort. SPIRAL has demonstrated across a wide range of hardware architectures that it is able to produce software that outperforms the best human programmers, and it was designed to automatically deliver the performance of the best hand-tuned code on a new target platform on day one. SPIRAL has successfully targeted single-core/multicore/manycore CPUs, GPUs, DSPs, the Cell BE, Xeon Phi, FPGAs, clusters, up to 128k cores on BlueGene/L/P/Q, the K computer, and pre-silicon settings (IBM Cell BE and BG/Q, and Intel AVX and Xeon Phi). Work in DARPA BRASS is enabling SPIRAL to run as a just-in-time compiler, and we are extending SPIRAL to support CNN/DNN kernels and graph and sparse matrix algorithms in a GraphBLAS-related effort. In DARPA PERFECT we used SPIRAL to program and configure the HAMLeT memory-side data reorganization unit we developed. Our work on MEALib in PERFECT demonstrates how standard C++ source code can be interpreted as an embedded DSL program, compiled with SPIRAL, and run on advanced memory-side accelerators without any change to the source code, leading to a 150x performance gain and an 8000x power efficiency gain. SPIRAL is also used as a code generation backend in the DOE exascale efforts FFTX and SpectralPack and in the DARPA DPRIVE program to target homomorphic encryption. In this effort we plan to target the WSE with SPIRAL to explore how to make our code generation technology compatible with the WSE system.


Project Title: Learning the underlying molecular distribution of chemical spaces with large models for the generation and discovery of novel molecules

Zhiyong Zhang, Stanford University

Project Abstract: Learning and revealing the underlying complex molecular distributions of chemical spaces, whether general or specialized (for example, spaces with properties relevant to drug development and discovery and other essential applications), is of fundamental theoretical and practical importance. There has been increasing evidence that deep generative models of molecules, trained on relevant datasets, can be used to search chemical space for novel molecules. It is our hope that, when trained on adequately large datasets with large models, it is possible to develop a generalized model for the latent space representation of the underlying complex molecular distribution of the chemical space in general. Such a model of the general distribution of the chemical space could then be adapted, refined, and specialized for subspaces of desired chemical properties via transfer learning on specialized datasets. Large datasets and large models mandate novel hardware and efficient software algorithms for rapid iterations of the training and tuning processes. In this work we will port, further develop, and scale up a deep generative model, GENTRL, that we have been porting and testing during the early access phase of Neocortex. We will use increasingly large datasets and increasingly complex NN architectures to tune, refine, and benchmark the NN model against existing datasets and molecules that are well characterized experimentally. In addition, we will further develop and apply a novel algorithm for optimizing NN parameters, recently developed and inspired by conservation of energy/cost on the hypersurfaces of the cost functions of the NN model.


Project Title: Neural PDE Solvers on regular and irregular domains

Aditya Balu, Iowa State University

Project Abstract: Neural network-based approaches for solving partial differential equations (PDEs) have recently received special attention. While most of these approaches are point-based (implicit neural representation), very few deal with parametric PDEs (i.e. a diverse set of boundary/initial conditions or a family of PDEs). Further, a large majority of neural PDE solvers only apply to rectilinear domains and do not systematically address the imposition of Dirichlet/Neumann boundary conditions over irregular domain boundaries. Over the past couple of years, we have developed a series of neural PDE solvers that use finite element basis functions to obtain the loss function for a particular PDE and can apply boundary conditions naturally. We have extended this to more complex and larger domains (megavoxel domains). Further, we recently proposed to neurally solve partial differential equations over domains with irregularly shaped (non-rectilinear) geometric boundaries; the key technical ingredient to realizing this model is a novel approach for identifying the interior and exterior of the computational grid in a differentiable manner. With this proposal, we aim to extend these neural PDE solvers to gigavoxel domains (which cannot be handled on modern CPU and GPU HPC clusters). We hope to use the Cerebras system to scale this approach to such large problems.


Project Title: Fast Document Ranking with Transformer-based Neural Models

Tao Yang, University of California, Santa Barbara

Project Abstract: This proposal studies efficient optimization for transformer-based neural document ranking. Recently, transformer-based neural ranking with deep contextual models has been extensively studied for delivering high relevance scores in top-k search of text documents. The main challenge is that using such a model to rank or re-rank documents is extremely expensive during runtime inference. This proposal is focused on developing efficient solutions to perform transformer-based re-ranking computation during ad-hoc query processing. The evaluation process will use public datasets to assess the effectiveness of the proposed techniques in terms of relevance and efficiency.


Project Title: Explain SARS-CoV-2 Spike Protein Evolution using AI

Bin Hu, Los Alamos National Laboratory

Project Abstract: Viral pathogens target 'receptor' proteins on the surface of host cells to initiate infection. Recognition of the receptor is coordinated by a viral surface protein, and the strength of binding, determined by the biochemical properties of the amino acid sequences of the viral and host proteins, often dictates the course of disease. Because of the impact on viral fitness during host immune responses to viral infection, there is constant evolutionary selective pressure on these viral surface proteins, where mutations that provide a fitness advantage to the virus will outcompete and spread more rapidly than other variants. Because of their accessibility, viral surface proteins are also common targets for vaccines and therapeutic antibodies. Mutations in viral surface proteins can perturb antibody binding and lead to increased infection or even escape from vaccine and therapeutic regimes. Evidence points to this being the case with some of the more recent lineages of SARS-CoV-2 that continue to spread within the US. We have developed a machine learning (ML) approach to study deep mutational data and the resulting phenotypes of receptor binding domain variants of the SARS-CoV-2 Spike protein. This ML model can accurately predict the expression level of viral proteins, including those with combinatorial mutations, as well as predict their affinity to the human ACE2 receptor. We plan to further develop this model and test several alternative model architectures using natural language processing and graph models to increase the explainability of the model. The ultimate goal of this work is to develop explainable AI methodology for studying viral evolution and associated biothreats.


Project Title: Developing field equation application programming interface for fluid dynamics applications

Dirk Van Essendelft, National Energy Technology Laboratory

Project Abstract: The National Energy Technology Laboratory and Cerebras Systems Inc. are developing a domain-specific Wafer scale engine Field equation Application programming interface (WFA). It is designed to solve field equations on the Wafer Scale Engine (WSE). Initial results showed the outstanding performance of the WFA in speed, power consumption, and carbon footprint. This project aims at further maturing the WFA and investigating its applications in computational fluid dynamics and related fields. By maturing the WFA, users will be able to form and solve field equations through an easy-to-use, high-level Python interface while maintaining the high performance of the WSE.


Project Title: Predicting subsurface CO₂ behaviors with deep neural network and fluid dynamics simulation

Chung Shih, National Energy Technology Laboratory

Project Abstract: The National Energy Technology Laboratory (NETL) leads the US DOE multi-lab SMART (Scientific Machine learning to Accelerate Real-Time decisions) Initiative, one of whose goals is near real-time forecasting of CO₂ behavior after injection into subsurface storage reservoirs. One of the SMART tasks is to explore and research advanced AI/ML techniques and computational technologies. In this proposed work, NETL's SMART team aims to use the CS-2 with the team's AI/ML models and to leverage NETL's Wafer scale engine Field equation Application programming interface (WFA) to explore HPC-type solutions for the subsurface. A separate PSC proposal from NETL focuses on maturing the WFA.


Project Title: Parameter Efficient Fine-tuning for Large Language Models

Hualou Liang, Drexel University

Project Abstract: Large language models have become the mainstay of natural language processing (NLP). These models entail high costs in terms of storage, memory, and computation time, which has motivated a large body of work on model compression to make them smaller and faster to use in real-world applications. One attractive solution to this problem is parameter-efficient fine-tuning, in which only a subset of the model parameters is fine-tuned. However, the question of which subset should be trained to achieve the best result remains unanswered. In this project, we have initially analyzed different components of the commonly used BERT model to see which one changes the most after fine-tuning. We show that the output of LayerNorm changes the most among the model components when the model is fine-tuned on the Microsoft Research Paraphrase Corpus (MRPC), one of the General Language Understanding Evaluation (GLUE) tasks. We further show that fine-tuning only this component achieves performance competitive with full fine-tuning and other parameter-efficient fine-tuning approaches. Moreover, we use Fisher information to assess which parameters in LayerNorm are most important, so that even fewer parameters are involved in parameter-efficient fine-tuning. Once granted resources, we plan to test our hypothesis on the rest of the GLUE tasks before applying the model to a real-world application involving drug labeling data.
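
A minimal sketch of the LayerNorm-only fine-tuning idea, assuming the Hugging Face Transformers library: every parameter is frozen except LayerNorm weights and biases (keeping the task head trainable is our assumption here, and the Fisher-information selection step is omitted).

```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

trainable, total = 0, 0
for name, param in model.named_parameters():
    # Keep only LayerNorm parameters (and the classification head) trainable.
    param.requires_grad = "LayerNorm" in name or "classifier" in name
    total += param.numel()
    trainable += param.numel() if param.requires_grad else 0
print(f"trainable parameters: {trainable} / {total}")

# The partially frozen model can then be handed to a standard optimizer:
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```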


Project Title: Towards Deep Vision-Language Models for Ecological Monitoring

Chinmay Hegde, New York University

Project Abstract: A key upcoming challenge in preserving the Earth's biosphere will be the continuous monitoring of the variety of animal and plant species present in given ecosystems. Artificial intelligence (AI) can play an important role in addressing this challenge, and very recent advances in vision-language models (VLMs) hold promise for building powerful, human-interpretable AI models that can be deployed on a wide array of datasets. However, unlike standard deep neural networks, these models are extremely challenging to train on workstations or even small clusters. This Neocortex project will serve two purposes. First, it will help my lab build the first open-source VLMs for image classification and object detection for ecological monitoring. These will be trained on large-scale image datasets (such as iNaturalist) and will specifically be fine-tuned for robust species tracking and monitoring. Second, all software developed within this project will be open-sourced and will serve as stepping stones at the intersection of machine learning and computational sustainability.


Project Title: Making the Largest Map of Our Universe

Biprateep Dey, University of Pittsburgh

Project Abstract: Maps of our Universe help us study how the contents of the universe evolved with time and how galaxies formed and developed, and they help pinpoint the location of short-lived astrophysical events like supernovae or sources of gravitational waves. To make such maps, astronomers measure a quantity called redshift, which is a proxy for galaxy distance. However, high-precision measurements of redshift can be made for only a tiny fraction of the galaxies for which we have images. Because of this limitation, it is necessary to infer redshift information from imaging data alone; the resulting measurements are called photometric redshifts. Precise estimates of photometric redshifts, along with associated uncertainties, are key to astronomical research for the coming decade, since a majority of the data sets that will be available to the astronomy community will come from large-scale imaging-only astronomical surveys and depend heavily on photometric redshifts to measure galaxy distances. We propose to use one such data set of over 32 million galaxy images, the DESI Legacy Imaging Surveys, to pre-train a vision transformer-based self-supervised model that will learn a latent representation of the galaxy images. We then plan to fine-tune the model using a small number of known galaxy redshifts from the Dark Energy Spectroscopic Instrument (DESI) to infer redshifts for the galaxies for which we only have images. This map of our Universe will enable the myriad scientific use cases previously mentioned.


Project Title: Monitoring and Mitigating Electric Grid Instability due to Renewables Using Neural Networks

Amar Ramapuram, Iowa State University

Project Abstract: The electric power grid is an incredible feat of engineering: it is essentially a very large interconnected machine whose operation is dictated by complex non-linear equations over various time scales. The highly non-linear nature of the grid demands that it be operated in a very limited operating range in order to ensure robustness and stability against unexpected events. Conventionally, power system operators have used engineering judgment, offline planning and operation simulations, and linear analysis to operate the grid, compensating for the non-linearity by setting conservative thresholds. More recently, the increasing adoption of variable, uncertain renewable power sources (wind, solar, etc.) is challenging some of the core operational assumptions made during offline analysis. In this project, we will address the problem of monitoring and mitigating grid instabilities by leveraging machine learning to learn a function that can predict the margin to instability given the load demand and generation injections. This will enable grid operators to monitor the grid in near real time and will be used to devise control schemes to mitigate emerging instabilities in the grid.


Project Title: Formal Abstracts in Mathematics

Thomas Hales, University of Pittsburgh

Project Abstract: A major goal of the international math community is to obtain tools for the automated processing and transformation of mathematical documents. These tools do not currently exist in a satisfactory form and extensive research is oriented towards improving on this. The Formal Abstracts Project aims to provide mathematicians with software tools for stating their research results in a human/machine readable format amenable to formal verification. In order to achieve this goal, the Formal Abstracts Project has recognized the need for (1) a comprehensive vocabulary of mathematics, in order to state research results, and for (2) improved automated reasoning tools to aid in processing and formally verifying those statements. Using the startup allocation #TG-DMS190028, we have (1) applied techniques from Natural Language Processing (NLP) to produce a dataset of labeled definitions extracted from the entire arXiv mathematics corpus, and (2) applied deep learning to accelerate SAT solvers, a type of automated reasoning tool. We consider these successes to be promising first steps towards our ultimate goal. We propose a strategy to continue work in this vein, including a machine learning methodology that uses well-established techniques in NLP. This methodology consists of a detailed strategy to obtain and process the relevant data, and has been thoroughly peer-reviewed by the Mathematical Knowledge Management (MKM) community. Our group has received a major grant from the Alfred P. Sloan Foundation (G-2018-10067) to develop software and services for transforming mathematical results into formally structured data that machines can read, process, search, check, compute with, and learn from as logical statements. This puts our group in an ideal position to create this much needed resource.


Project Title: Tensor networks and massively parallel language models on accelerator arrays

Vivek Srikumar, University of Utah

Project Abstract: The gigantic size of recent transformer language models like GPT-3 is due to the use of large dense weight matrices, too large to fit even on a wafer-scale engine like the Cerebras systems. There has been increasing recent interest in exploring more compact representations of weight matrices using factored representations based on tensor networks. In this project, we propose to explore two complementary research questions: 1) Can we develop compact and accurate factored language models that fit within the CS-1 and achieve accuracy comparable to much larger models using the standard transformer architecture? 2) Can we develop effective customized mapping/scheduling techniques to enable high performance on the Cerebras CS-1 for training and inference with factored tensor networks?
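
As a simple illustration of the factored-weight idea, the sketch below replaces a dense weight matrix with a plain rank-r factorization; the richer tensor-train or tensor-network decompositions the project targets generalize this to chains of small cores, and all sizes here are arbitrary.

```python
import torch
import torch.nn as nn

class FactoredLinear(nn.Module):
    """Linear layer with W = U @ V, using r*(d_in + d_out) parameters
    instead of d_in*d_out."""
    def __init__(self, d_in: int, d_out: int, rank: int):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_in, rank) * d_in ** -0.5)
        self.V = nn.Parameter(torch.randn(rank, d_out) * rank ** -0.5)
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return (x @ self.U) @ self.V + self.bias

layer = FactoredLinear(d_in=4096, d_out=4096, rank=64)
x = torch.randn(8, 4096)
print(layer(x).shape)  # torch.Size([8, 4096])
```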


Project Title: Accelerating Large-Scale Graph Neural Networks Training on Cerebras Wafer Scale Engine

Dingwen Tao, Washington State University

Project Abstract: Graph Neural Networks (GNNs) are a promising approach to learning efficiently from graph-structured data, and have shown advantages in many critical applications such as chemical reaction prediction, traffic state prediction, and text classification. This project investigates how to accelerate large-scale GNN training by leveraging the Cerebras Sparse Linear Algebra Compute cores and our proposed new graph re-ordering algorithm. This project is well aligned with our NSF-funded large-scale DNN training project (https://www.nsf.gov/awardsearch/showAward?AWD_ID=2034169).


Project Title: Large scale Machine Learning force fields for metal hydride systems

Venkatasubramanian Viswanthan, University of Michigan

Project Abstract: Machine learning has enabled the prediction of various material properties, from formation energies and HOMO/LUMO levels to atomic energies and forces. The increasing number of available material and molecule datasets presents an opportunity to train machine learning models on datasets larger than those typically used in materials science, including larger sets of descriptors and more model parameters; the computational cost of training typically limits both dataset size and model size. In this work we train machine learning models to predict scalar properties of metal hydrides, materials that have been shown to exhibit high-temperature superconductivity, as well as of molecular datasets such as QM9 that are important in various chemical processing industries. Neocortex will allow us to push the limits of training-set and model sizes at record training speeds in an attempt to beat state-of-the-art accuracy on scalar properties.


Project Title: Apply Machine Learning to Predict Antibody Drug Developability

Mark Bower, Yale University

Project Abstract: Recently, we have developed the Network Parameter Outlier (NPO) algorithm, which uses graph clustering methods to cluster biological event data (e.g., action potentials, sharp-wave ripples) in O(n) time, improving on current methods that require O(n log n) time. Biologically relevant events are detected and stored as a graph with connections weighted by the corresponding correlation coefficients. A graph clustering technique (Louvain) partitions the graph into clusters. This process is repeated in a moving-window fashion, with new vertices being added and old vertices being dropped. Cluster labels are passed to succeeding windows, generating a consistent clustering across an arbitrarily large data set. Because the windowed-data graph size remains constant, processing time is bounded, which allows the total clustering time to increase linearly with data size. The optimal parameters regarding both computational time and overall performance, however, are unknown due to the memory limitations of conventional CPUs. We propose to use the large-memory model of Neocortex to allow graph sizes much larger than can be supported by current computing hardware. The algorithm has been wrapped in a Singularity container linked to a MySQL database (also running in a Singularity container) to allow portable computation.
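
A schematic re-creation of the windowed clustering loop described above might look like the following; the correlation threshold, window size, and naive label hand-off are illustrative assumptions rather than the NPO implementation, and NetworkX's Louvain routine stands in for whatever clustering backend NPO actually uses.

    # Sketch: cluster correlation-weighted event graphs in a moving window,
    # carrying cluster labels forward so labels stay consistent across windows.
    import numpy as np
    import networkx as nx
    from networkx.algorithms.community import louvain_communities

    def window_graph(events):
        """events: (n_events, n_features); edges weighted by correlation."""
        corr = np.corrcoef(events)
        G = nx.Graph()
        n = len(events)
        G.add_nodes_from(range(n))
        for i in range(n):
            for j in range(i + 1, n):
                if corr[i, j] > 0.5:                     # illustrative threshold
                    G.add_edge(i, j, weight=corr[i, j])
        return G

    rng = np.random.default_rng(0)
    stream = rng.normal(size=(200, 32))                  # stand-in event waveforms
    window, step = 50, 25
    labels = {}
    for start in range(0, len(stream) - window + 1, step):
        G = window_graph(stream[start:start + window])
        parts = louvain_communities(G, weight="weight", seed=0)
        for k, part in enumerate(parts):
            for node in part:
                labels[start + node] = k                 # naive per-window label hand-off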

Former Project Details


Project Title: Privacy-preserving synthetic data from federated clients

Giulia Fanti, Carnegie Mellon University

Project Abstract: Synthetic data refers to randomized data that is drawn from the same (or a similar) distribution to an underlying ground truth dataset. In recent years, synthetic data has become remarkably high-quality, due to the growing successes of deep generative models. However, synthetic data from deep generative models is typically trained over centralized datasets. In practice, one important use case for synthetic data is to understand data patterns at distributed clients (e.g., in a federated learning (FL) setting). Our goal in this project is to design a method for generating synthetic data at a central server, from the data of many distributed, privacy-conscious clients. We propose to achieve this by first computing a privacy-preserving estimate of the mean and covariance of client embeddings, under a pre-trained embedder, as described in our upcoming paper at the TrustML Workshop at ICLR 2023. Then, we aim to generate synthetic data at the server side that matches the privately-estimated embedding distribution of the client data, by decoding a private embedding into a full sentence. In doing so, we wish to explore whether federated client fine-tuning can be eliminated in some cases, in favor of fine-tuning on privately-generated synthetic datasets. Our proposed pipeline will require fine-tuning (or possibly retraining) standard large language models, such as BERT or T5 on benchmark datasets.
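
As a hedged sketch of only the first step (a Gaussian-mechanism-style moment estimate, with placeholder clipping and noise parameters rather than the exact mechanism from the cited workshop paper):

    # Sketch: noisy mean/covariance of client embeddings via clipping plus Gaussian noise.
    import numpy as np

    def private_moments(embeddings, clip=1.0, sigma_mean=0.1, sigma_cov=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Clip each client embedding to bound its contribution (sensitivity).
        norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
        clipped = embeddings * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
        n, d = clipped.shape
        mean = clipped.mean(axis=0) + rng.normal(0, sigma_mean * clip / n, size=d)
        centered = clipped - mean
        cov = centered.T @ centered / n
        cov += rng.normal(0, sigma_cov * clip ** 2 / n, size=(d, d))
        cov = (cov + cov.T) / 2                          # keep the noisy estimate symmetric
        return mean, cov

    mean, cov = private_moments(np.random.default_rng(1).normal(size=(1000, 16)))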



Project Title: Discovery & Engineering Protein Artificial Intelligence (DEPr-AI) with Manifold-aware Protein Resuscitation (MaPR): Synthetic Bioinformatics in the Age of AlphaFold2

Vincenzo Carnevale, Temple University

Project Abstract: Though separated in some cases by over 1B years of evolutionary time, divergent protein families may share remarkable sequence and structural patterns. Although the patterns are complex, and although the evolutionary parameters that generated them are ambiguous, the patterns are detectable given sufficient protein data. Therefore, a model trained on sufficient data could in principle extract the parameters from the patterns, and then parameterize itself to generate synthetic proteins that are statistically indistinguishable from those generated by natural evolutionary processes, but in a controllable way. For the first time, sufficient data are available to train such a model. We propose Discovery and Engineering Protein Artificial Intelligence (DEPr-AI), a BERT-like autoencoder neural network (AENN) generative protein model trained on evolutionary patterns in protein sequence and structure data. Until just recently, sufficient volumes of protein structure data were unavailable, and so prior generative protein modeling methods focused primarily on sequence data. Here, we propose to leverage the rich information contained in the sequence-structure map that was previously unaccounted for. The recent release of AlphaFold2 (AF2) by Google's DeepMind, which can predict structures with atomic precision on par with experimental techniques, makes our proposed work newly feasible and extremely timely. The first part of this research proposal is to use Neocortex to generate hundreds of thousands of protein structures using AF2, a task that would take days or weeks of GPU time on current XSEDE resources such as Bridges2, but could take days, hours, perhaps even minutes on Neocortex. After AF2 structure generation on Neocortex, we will extend our prior generative protein sequence modeling efforts to characterize the relationship between protein sequence-structure and conformational dynamics using DEPr-AI, which employs a novel joint embedding approach: sequences are merged with their corresponding structures into a paired representation. By embedding these joint sequence-structure protein entities into the latent space of an AENN during training, DEPr-AI can learn the sequence-structure-function continuum from the evolutionary patterns in the data, and encode the continuum into the topology of a fitness landscape with improved accuracy and interpretability over current methods. We propose another method, Manifold-aware Protein Resuscitation (MaPR), which DEPr-AI can use to interpolate new synthetic proteins from the latent space by "resuscitating" them along high-probability geodesic paths between known proteins. With MaPR, DEPr-AI, and AF2 all running on Neocortex, we will deliver breakthroughs in protein discovery, engineering, and analysis that were technologically infeasible until now. Further, we have already begun coordinating with experimental collaborators, who will verify that our synthetic proteins have the features predicted by DEPr-AI.



Project Title: ComputeCOVID19++: Accelerating Medical Diagnosis and Monitoring via High-Performance Deep Learning on CT Images

Wu Feng, Virginia Tech

Project Abstract: ComputeCOVID19++ builds on our existing work with ComputeCOVID19+, a CT-based framework that significantly enhances the speed and accuracy of diagnosing and monitoring COVID-19 (and its variants) via a deep-learning network for CT image enhancement called DDnet, short for DenseNet and Deconvolution network. In this work, we propose to create a new algorithm that is synergistically co-designed with the Cerebras CS-1 hardware and its associated software stack in the integrated HPE Superdome Flex and Cerebras CS-1 system. As such, we seek to improve the regularization and specificity of our DDnet in ComputeCOVID19+, which enhances the quality of any given CT scan, and then map the sparsity of our model onto the Cerebras CS-1 to reduce the training time of DDnet. In addition, we seek to propose and validate the efficacy of a new set of metrics that can be used as a guide to quantify the sparsity of any deep-learning model for different types of layers, such as convolutional and fully connected layers.



Project Title: Apply Machine Learning to Predict Antibody Drug Developability

PIN-KUANG LAI, Stevens Institute of Technology

Project Abstract: The number of monoclonal antibody (mAb) drugs in clinical trials or approved for use has increased rapidly in recent years, with 97 drugs approved by the U.S. Food and Drug Administration (FDA) or the European Medicines Agency (EMA) as of August 2020. In addition to successful antibody binding to the target to stimulate biological responses, the developability properties of mAbs, such as the feasibility of their manufacture, stability in storage, and absence of off-target stickiness, are essential to new drug development. In fact, attrition of therapeutic candidates during clinical development is the major factor in high development costs. However, the developability profiles of antibodies are difficult to assess in early-stage discovery and candidate screening due to the limited number of molecules, material availability, and lack of physical understanding. Therefore, predictive tools that can evaluate antibody developability as early as possible in the discovery/development process are desirable. Key developability properties include a low aggregation rate and low viscosity. Previously, we have developed computational tools based on molecular dynamics (MD) simulations, or that use features extracted from MD simulations to train machine learning models, to predict antibody aggregation and viscosity. Two of the key descriptors are called spatial aggregation propensity (SAP) and spatial charge map (SCM), respectively. Calculating SAP and SCM requires building a homology model from the antibody sequence and running MD simulations to obtain ensemble averages, a step that is very time-consuming and requires supercomputing resources. The goal of this project is to train neural networks on MD simulation results using antibody sequences as inputs. The resulting ML model will speed up the calculation of SAP and SCM scores and facilitate antibody screening in early-stage design.
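
Purely as an illustrative sketch under assumed encodings (a fixed-length one-hot sequence representation and placeholder SAP/SCM targets, not the authors' actual model or data), the sequence-to-descriptor surrogate could be set up as:

    # Sketch: train a surrogate that maps an antibody sequence to MD-derived
    # SAP/SCM scores, so screening can skip homology modeling and MD.
    import torch
    import torch.nn as nn

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    AA_INDEX = {a: i for i, a in enumerate(AMINO_ACIDS)}

    def one_hot(seq, max_len=150):
        x = torch.zeros(max_len, len(AMINO_ACIDS))
        for i, aa in enumerate(seq[:max_len]):
            x[i, AA_INDEX[aa]] = 1.0
        return x.flatten()

    model = nn.Sequential(
        nn.Linear(150 * 20, 256), nn.ReLU(),
        nn.Linear(256, 2),                       # outputs: [SAP score, SCM score]
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Sequences and targets would come from prior MD simulations; these are placeholders.
    seqs = ["EVQLVESGGGLVQPGGSLRLSCAAS", "QVQLQESGPGLVKPSETLSLTCTVS"]
    targets = torch.tensor([[0.8, 1.2], [0.6, 0.9]])
    X = torch.stack([one_hot(s) for s in seqs])
    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(model(X), targets)
        loss.backward()
        opt.step()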



Project Title: Wafer-Scale Geometric Deep Learning on the PSC Neocortex

Kenneth Chiu, Binghamton University

Project Abstract: Neural networks, especially deep neural networks, have seen remarkable success in the past decade on a variety of problems. Many problems, however, are naturally modeled as graph problems, for which traditional neural networks are not well-suited. This has led to the development of graph neural networks (GNNs). GNNs, or Message Passing Neural Networks, are a family of deep learning algorithms based on message passing or graph convolutions, and are designed for supervised and unsupervised learning on graph-structured data. The message passing or convolution operation is analogous to the filter operation of Convolutional Neural Networks (CNNs) over neighboring pixels. CNNs can be viewed as operating on grid-like or lattice graphs with a consistent number of neighbors. GNNs act over a more general set of operations, and thus can handle an arbitrary number of neighbors and varied kernel operations. As a result, kernel operations vary depending on the data itself and require generalized sparse scatter/gather communication over the data features. We will use a customized CSR/CSC format with custom kernels to perform efficient reduction across neighboring graph vertices. We will co-develop our implementation with three applications. One application will be inverse molecular design. Molecules are naturally represented as graph structures, and deep learning on molecules has been hugely successful in many domains such as materials science and drug discovery. Successful incorporation of deep learning in the molecular design loop can result in the development of exotic materials for energy storage, energy generation, and combating climate change. Structure-property prediction is an important part of the design of new materials. Our second application will be predicting events in the world’s most popular multiplayer video game, League of Legends. Using high-resolution large-scale data from thousands of played games, we will learn interactions in complex dynamic graphs that update in real time. Dynamic graphs such as these will be a case study for performing accelerated deep learning on real-time graphs. Our third application will be identifying state-sponsored disinformation from online interaction graphs.
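
A minimal sketch of the CSR-based neighbor reduction pattern described above (illustrative only; the project's custom kernels and CSC variant are not shown):

    # Sketch: mean-aggregation of neighbor features using a CSR adjacency,
    # the gather/reduce pattern a GNN message-passing layer relies on.
    import numpy as np

    indptr  = np.array([0, 2, 3, 5])         # CSR row pointers (3 vertices)
    indices = np.array([1, 2, 0, 0, 1])      # neighbor ids per vertex
    feats   = np.random.default_rng(0).normal(size=(3, 8))

    aggregated = np.zeros_like(feats)
    for v in range(len(indptr) - 1):
        nbrs = indices[indptr[v]:indptr[v + 1]]          # gather neighbor ids
        if len(nbrs):
            aggregated[v] = feats[nbrs].mean(axis=0)     # reduce over neighbors
    # A learned update (e.g., a linear layer plus nonlinearity) would follow.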



Project Title: Unsupervised labelling and learning from large audio datasets

Bhiksha Raj, Carnegie Mellon University

Project Abstract: Acoustic scene analysis is fast becoming a standard technology, expected in devices such as smartphones. But the latest solutions are limited by the availability of labelled training data. In this project, we propose to automatically label a very large quantity of audio data, to generate what is currently the largest such dataset for use by the research community. This will, however, require the development of algorithms that can iterate over such large amounts of data and iteratively refine their automatically generated labels. On traditional machine learning hardware such as Graphics Processing Units (GPUs), we expect our approach to take several weeks or more of compute time for a single pass through the dataset, leading to unreasonable latencies in research (and development) time. We believe that the Neocortex system can reduce the iteration time by orders of magnitude, enabling us to optimize our unsupervised inference algorithms and release labelled data resources that will be of high value, not just to us, but to the research community at large.



Project Title: A novel deep learning method for discovering genetic mechanisms underlying differential gene regulation

Sreeskandarajan Sutharzan, Cincinnati Children's Hospital Medical Center

Project Abstract: Gene regulation is a fundamentally important molecular process that is required for all known forms of life. Gene regulation is defined as the processes underlying the activation or repression of gene expression levels. Transcription factors (TFs) are proteins that play key roles in gene regulation. The human genome encodes >1,600 TFs, each of which plays an important gene regulatory role in particular contexts. TFs act by recognizing short DNA sequences in the genome. Upon doing so, they recruit other proteins to ultimately influence gene expression levels. In this sense, TFs are the primary molecules responsible for interpreting the genome. Our lab and many others are currently engaged in understanding this complex “regulatory grammar”, with the ultimate goal of predicting gene expression levels from DNA sequence alone. Achieving this goal would enable a thorough understanding of genome function, and how genetic variation contributes to phenotypes and diseases. Recent advances in Deep Learning methodologies and capabilities are quickly enabling major progress towards this goal. In this study, we propose to leverage the power of Deep Learning to study a particularly important question in regulatory genomics – what DNA sequences underlie differential gene regulatory mechanisms that occur due to differential cellular conditions?



Project Title: Enabling Training and Inference of Large and Sparse Deep Learning Models

Tushar Krishna, Georgia Institute of Technology

Project Abstract: The end of Moore’s Law has created a need for domain-specific hardware accelerators to efficiently run High Performance Computing (HPC) workloads. The Neocortex platform provides access to the Cerebras wafer-scale engine, an accelerator that supports dataflow execution. The focus of this proposal is to develop and study efficient algorithms for key linear algebra kernels used in HPC workloads. Specifically, we will target Graph Neural Networks as the workload, which involve both dense and sparse matrix multiplications. The PI is also part of the Department of Energy ARIAA center and will leverage ongoing research in key tensor kernels from the center and identify acceleration mechanisms using Neocortex.
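
As a small illustration of the kernel class in question (not the project's implementation), one propagation step of a GNN reduces to a sparse-dense matrix multiply:

    # Sketch: the sparse-dense multiply (SpMM) at the heart of GNN propagation,
    # expressed with SciPy CSR as a stand-in for an accelerator kernel.
    import numpy as np
    from scipy.sparse import random as sparse_random

    A = sparse_random(1000, 1000, density=0.01, format="csr", random_state=0)  # adjacency-like matrix
    H = np.random.default_rng(0).normal(size=(1000, 64))                       # node features
    out = A @ H                                                                # one propagation step
    print(out.shape, A.nnz)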



Project Title: Earthquake Phase Association with Graph Neural Networks

Gregory Beroza, Stanford University

Project Abstract: In this work we propose a new Graph Neural Network (GNN) architecture for earthquake phase association, in which we process streaming datasets of estimated seismic wave arrival times (known as “picks”), determine the number and location of earthquakes in a time window, and associate picks to earthquake sources. We train the GNN through supervised learning with synthetic pick datasets, for which ground truth is known, and for which there is high variability and noise (false picks) in the datasets. The network is not trained for a particular configuration of stations; rather, it is trained to allow variable network geometry, numbers of stations, station qualities, and pick rates. By frequently including closely overlapping events in space and time in the training data, the GNN learns to untangle overlapping events. As a mathematical function, the GNN maps sets of sets (sets of discrete picks, on each station) to a continuous, smooth, bounded prediction of source likelihoods in space-time, similar to the traditional back-projection (BP) mapping; however, it greatly suppresses the side-lobes that plague traditional approaches, and large and small earthquakes are mapped to a similar output value (in contrast to BP, where outputs scale with the number of observed picks). The technique has been tested on real data from the NC network of northern California, using machine-learning-produced picks as input, where we recover over 95% of previously reported earthquakes > M1 throughout the interval 2000–2020. Initial applications suggest that the GNN will reveal at least 5x more previously undetected earthquakes < M1, which will reveal active fault structure in unprecedented detail. By enabling us to train more rapidly, the computing capabilities of Neocortex can help us to significantly enhance these results. With the Neocortex computing platform, we will have the necessary capabilities to optimize the GNN more thoroughly over the hyperparameter space, tune the synthetic data generator to reduce the covariate shift between synthetic and real data, and add additional modules to the architecture, such as an initial full-waveform processing layer. We will also be able to perform an ablation analysis to analyze the performance of the individual components of the GNN more thoroughly, which can help identify aspects of the architecture that can be improved and assist other researchers in adapting our GNN to their own applications.



Project Title: Deep learning analysis for single-molecule ligand-receptor interaction

Peiwen Cong, Georgia Institute of Technology

Project Abstract: The biophysical and biochemical characteristics of ligand-receptor interactions govern many biological processes, particularly cell signal transduction, where extracellular molecular bindings relay messages through membranes to initiate intracellular responses. Our in-situ nanotools, the micropipette adhesion frequency assay and the biomembrane force probe, have delivered cutting-edge knowledge about single-molecule ligand-receptor interactions in their native micro-environments. At the core of these nanotools, the ultra-sensitive kinetic measurements rely heavily on the one-dimensional deformation of a micropipette-aspirated red blood cell (RBC). Here, we propose to improve them with a convolutional neural network (CNN) for feature extraction followed by a recurrent neural network (RNN) for dynamic event detection, potentially leading to more precise quantification, more insightful interpretation, and more accurate decision-making. The unique opportunity created by Neocortex can ease the challenges and accelerate the progress of integrating these deep learning components into our current ligand-receptor kinetic analysis workflows.



Project Title: Artificial Intelligence Framework to Predict Wall Stresses on Aneurysms

Timothy Chung, University of Pittsburgh

Project Abstract: Abdominal aortic aneurysm (AAA) is the progressive, degenerative dilation of the terminal aorta; without treatment, AAAs can undergo rupture, an often-fatal event that is the 13th most common cause of death in the US. Clinical intervention occurs when the maximum diameter exceeds 5.5 cm, a diameter beyond which the risk of rupture is thought to be greater than the risk of intervention. A biomechanical tool, the rupture potential index (RPI), was developed by our group through computational finite element analysis (FEA) and experimental uniaxial extension testing. The RPI is defined as the ratio of transmural wall stress (driven by systolic pressure) to failure strength (the maximum strength the aneurysm wall can support). However, the RPI has not translated clinically due to its heavy computational requirements, the reliance on manual segmentation methods, and the relatively low number of patient images studied (the combined number across significant studies investigating peak wall stress is around 348, and the RPI was not always calculated). We use a combination of machine learning techniques to automatically segment aneurysm geometries and perform predictive modeling of wall stresses based on many computational simulations. Comparisons of shape and biomechanical indices are quantified to determine the reliability of the automatically reconstructed AAA geometry. Preliminary results have shown that we are able to predict wall stresses to within 0.34% based on shape indices, without the need for computational simulations. An increased sample size will allow us to further develop a clinically translatable tool to predict the biomechanical status of AAAs.



Project Title: Improving predictability of anomalies in vitreous silica using uncertainty-based adversarial attack

Rafael Gomez-Bombarelli, Massachusetts Institute of Technology

Project Abstract: Understanding the structure of glassy materials represents a tremendous challenge for both experiments and computations. One of the most common glass materials, vitreous silica, has been used in a plethora of commercial and scientific applications, but is still not well understood despite decades of research. Sharing the same tetrahedral order as water, vitreous silica is known to exhibit several anomalous behaviors in its physical properties, including a temperature-dependent density minimum around 900 °C and a density maximum around 1500 °C. Due to such anomalies, many empirical force fields and machine learning interatomic potentials have been shown to be volatile in predicting physical properties in a way that accurately reflects the mechanical and density anomalies in silica. Here, we exploit an automatic differentiation strategy in graph neural network (GNN) potentials to discover highly uncertain glass configurations, so that the structural configurations responsible for anomalies in vitreous silica have a higher likelihood of being learned. The automatic differentiation strategy works by performing an adversarial attack on a differentiable uncertainty metric. When combined into an active learning loop, only a small number of expensive ab initio molecular dynamics trajectories are needed as the initial training dataset.
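
A hedged sketch of such an attack loop, with toy feed-forward surrogates standing in for the GNN potentials and ensemble variance as the differentiable uncertainty metric (illustrative assumptions only; the project's actual metric and models may differ):

    # Sketch: "attack" atomic coordinates by ascending a differentiable
    # uncertainty metric (variance across a small ensemble of models).
    import torch
    import torch.nn as nn

    ensemble = [nn.Sequential(nn.Linear(3 * 24, 64), nn.Tanh(), nn.Linear(64, 1))
                for _ in range(4)]                       # toy stand-ins for GNN potentials

    coords = torch.randn(24, 3, requires_grad=True)      # one 24-atom configuration
    opt = torch.optim.Adam([coords], lr=0.01)
    for step in range(50):
        opt.zero_grad()
        energies = torch.stack([m(coords.flatten()) for m in ensemble])
        uncertainty = energies.var()                     # differentiable uncertainty metric
        (-uncertainty).backward()                        # gradient ascent on uncertainty
        opt.step()
    # The most uncertain configurations would then be sent to ab initio MD for labeling.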



Project Title: Discerning the complex pattern of brain networks related to psychotic disorders

Konasale Prasad, University of Pittsburgh

Project Abstract: Schizophrenia is a severe and chronic brain disorder associated with delusions, hallucinations, disorganized thoughts, and cognitive impairments. Available treatments are symptomatic and do not provide lasting recovery in the majority of persons with schizophrenia. Therefore, better elucidation of the neurobiology of this illness may help design new treatments. Studies to date clearly support that schizophrenia and related psychotic disorders are dysconnection syndromes. Hence, there is tremendous impetus to understand the nature and causes of dysconnectivity. However, current efforts are directed at examining networks built on one modality of data, such as diffusion or functional connectivity. Our lab, the CONCEPT Lab, uses multimodal MRI data (e.g., structural, diffusion-weighted, and functional imaging data) to construct multiplex multilayer networks to delineate dysconnectivity related to schizophrenia at different levels. Using this approach, our goal is to understand schizophrenia networks at the nodal and global level as well as at the subject and group level. Differences between the brain networks of persons with schizophrenia and healthy subjects have been extensively reported, and recent studies have used graph theoretical approaches. Although this approach provides important leads in elucidating network architecture and provides clues about potential functional impact, it does not provide a means to examine the entire graph and characterize the differences. For example, it does not answer questions about whether there are differences in the patterns of network architecture, what features of nodes tend to affect the strength of connections, and whether edge-centric patterns can help in classifying the networks. Graph Neural Networks (GNNs) are a way to identify graph differences and categorize networks. This set of machine learning approaches can help draw inferences at the node, edge, and graph level. Using GNNs, graphs and nodes can be classified into groups, and we will be able to make edge, or link, predictions. This will allow us to understand which graph connectivity patterns are unique to patients versus controls at a global and nodal level. Further, we are also interested in finding out whether there are subgroups within patients, since it is well known that schizophrenia is a heterogeneous disorder. These efforts will go a long way toward helping us understand the underlying pathology and how to better treat each network classification.



Project Title: Exploring Wafer Scale Engine on fluid dynamics simulations for atmospheric and other applications.

Siddhartha Ghosh, National Center for Atmospheric Research

Project Abstract: Numerical weather prediction (NWP) models are often implemented using well-known finite difference or finite volume numerical schemes characterized as low-arithmetic-intensity algorithms. They are typically limited by memory bandwidth and latency and are parallelized on an x-y (lat-lon) grid, with a small number of vertical levels, on the order of 10. As such, they appear to be a great fit for the WSE architecture. The efforts supported by this allocation request would seek to assess the performance capacity of the WSE architecture for stencil-based numerical approaches that underpin many NWP codes in existence today.
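
For reference, the kind of low-arithmetic-intensity kernel being assessed is exemplified by a simple explicit 5-point stencil update (an illustrative sketch, not an actual NWP code):

    # Sketch: one explicit 5-point stencil update on a single lat-lon slab;
    # each point touches five values but performs only a handful of flops.
    import numpy as np

    def stencil_step(u, alpha=0.1):
        new = u.copy()
        new[1:-1, 1:-1] = u[1:-1, 1:-1] + alpha * (
            u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:] - 4.0 * u[1:-1, 1:-1]
        )
        return new

    field = np.random.default_rng(0).normal(size=(256, 512))   # one vertical level
    for _ in range(10):
        field = stencil_step(field)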



Project Title: Robust Fault Detection of Cooling Systems using Multimodal Fusion

Han Hu, University of Arkansas

Project Abstract: The ever-increasing electrification of vehicles has created critical challenges for electronics cooling. High-power pulsed loads may cause faults in the cooling systems (e.g., boiling crisis) that may eventually lead to overheating and device failures. Due to the stochasticity of the cooling process, traditional physics-based thermal models are not capable of handling transient heat loads. Deep learning models have been developed for fault detection during two-phase cooling based on single-channel signals, but they suffer from low generalizability and interpretability. Addressing this issue requires creative and novel data-analytic approaches involving theoretical mathematics. A recent subject that provides a promising approach is topological data analysis (TDA) and its principal tool, persistent homology (PH). The proposed project seeks to develop an interpretable fusion model for two-phase cooling fault detection that leverages multimodal sensor signals from cooling systems (e.g., temperature, pressure, sound, and images), the pre- and post-processing power and internal DL modeling capabilities of TDA and PH, and attention-based interpretation to improve model accuracy, reliability, and interpretability. Multimodal signals from heterogeneous sources will be collected to create a database of two-phase cooling data. A multimodal fusion network will be developed and trained on the database, with integrated TDA/PH capabilities for data compression and feature engineering, and the interpretability of the network will be examined through attention-map-based analysis.
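
As a hedged sketch of how a PH feature might be extracted from one sensor channel (assuming the open-source ripser package and a simple delay embedding; this is not the project's pipeline):

    # Sketch: persistent-homology features from one sensor channel via a delay embedding.
    import numpy as np
    from ripser import ripser

    signal = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * np.random.default_rng(0).normal(size=2000)

    def delay_embed(x, dim=3, tau=5):
        n = len(x) - (dim - 1) * tau
        return np.stack([x[i * tau:i * tau + n] for i in range(dim)], axis=1)

    cloud = delay_embed(signal)[::10]                 # subsample the embedded point cloud
    dgms = ripser(cloud)["dgms"]                      # persistence diagrams (H0, H1)
    h1_lifetimes = dgms[1][:, 1] - dgms[1][:, 0]      # loop lifetimes as candidate features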



Project Title: Analysis of differential dependency on large-scale RNA expression networks

Gil Speyer, Arizona State University

Project Abstract: The dependency between genes within a functional biological pathway can be contrasted between two conditions through the calculated divergence between distributions of dependency networks [1]. EDDY (Evaluation of Differential DependencY) is a statistical test that identifies gene sets (i.e., pathways) that are significantly “rewired”, leveraging a probabilistic framework with resampling and permutation, aided by the incorporation of annotated gene sets, and demonstrating superior sensitivity when compared to other methods. Further, the ample and independent computation, coupled with the manageable memory footprint incurred by this statistical rigor, positions EDDY as an excellent candidate for graphics processing unit (GPU) acceleration [2]. Custom kernels written in CUDA decompose the independence test loop, network construction, network enumeration, and Bayesian network scoring to accelerate the computation. The algorithm has recently been used to discover novel drugs for pulmonary hypertension, repurposed from small compounds designed for cancer treatment [3]. The Neocortex RFP provides an opportunity to pursue new directions with EDDY analysis, such as the interrogation of larger gene sets and the development of statistical sampling strategies for larger (e.g., single-cell) RNA expression sample sets. [1] Jung S, Kim S. EDDY: a novel statistical gene set test method to detect differential genetic dependencies. Nucleic Acids Res. 2014 Apr;42(7):e60. doi: 10.1093/nar/gku099. Epub 2014 Feb 5. PMID: 24500204; PMCID: PMC3985670. [2] G. Speyer, J. Rodriguez, T. Bencomo and S. Kim, "GPU-Accelerated Differential Dependency Network Analysis", 2018 26th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP), 2018, pp. 410-414, doi: 10.1109/PDP2018.2018.00072. [3] Negi V, Yang J, Speyer G, Pulgarin A, Handen A, Zhao J, Tai YY, Tang Y, Culley MK, Yu Q, Forsythe P, Gorelova A, Watson AM, Al Aaraj Y, Satoh T, Sharifi-Sanjani M, Rajaratnam A, Sembrat J, Provencher S, Yin X, Vargas SO, Rojas M, Bonnet S, Torrino S, Wagner BK, Schreiber SL, Dai M, Bertero T, Al Ghouleh I, Kim S, Chan SY. Computational repurposing of therapeutic small molecules from cancer to pulmonary hypertension. Sci Adv. 2021 Oct 22;7(43):eabh2794. doi: 10.1126/sciadv.abh2794. Epub 2021 Oct 20. PMID: 34669463.



Project Title: Molecular mutagenesis by biological graphs

Ryan Mills, University of Michigan

Project Abstract: Variation in gene expression is a complex process correlated with multiple molecular features, such as chromatin structure, epigenetic marks, gene-gene and protein-protein interactions, as well as post-transcriptional modifications. The assayable molecular contexts of a locus (such as methylation, histone modification, and chromosome conformation) are suggestive, not causal: no single feature is enough to reveal the entirety of genomic interactions. We are developing new methods representing genes as tissue-specific, multilayer heterogeneous networks based on regulatory and interaction assays. The graph structure is then trained to regress quantitative measurements of gene expression through an attention-based graph neural network. Such an approach allows us to mutagenize the features within the structure to query the relative impact of molecular changes on expression at tissue-specific and gene-specific resolution. Our goal is to understand and discover the patterns of molecular combinations that come together to affect the regulation of gene expression, and to determine whether the varying impact of molecular features surrounding a gene creates a type of regulatory language that describes gene function and genomic architecture. To do this, we require advanced GPUs for training large models, because genomics data is notoriously large and heterogeneous. The novel graph structures we are using regularly require more memory than multiple 32 GB GPUs can provide.



Project Title: High-throughput and data-mining search for new rare-earth-free permanent magnetic borides

Boniface FOKWA, University of California, Riverside

Project Abstract: The project will focus on applying machine learning to discover new metal borides with high magnetocrystalline anisotropy and high Curie temperatures, with the long-term goal of realizing rare-earth-free permanent magnets (PMs) that can compete with or surpass current PMs. The creation of DFT databases (predicted structures) and accumulated experimental data (e.g., the Materials Project and Citrine Informatics) has opened new avenues for designing materials with targeted properties. In particular, machine learning techniques have provided the ability to use these data sets to rapidly predict the preferred crystal structure or physical properties of intermetallics. Specifically, we will use subsets of known and predicted structures as training sets for the machine learning algorithm, while the large databases available (Materials Project and ICSD) will be used to expand the training data sets, which will then enable the prediction of new candidate structures.



Project Title: Ocean Reanalysis Data-Driven Deep Learning Forecast

Ruoying He, North Carolina State University

Project Abstract: In this project, a hybrid model combining empirical orthogonal functions (EOF), complete ensemble empirical mode decomposition (CEEMD), and artificial neural networks (ANNs) will be developed to enable efficient and accurate ocean forecasts for the Northwest Atlantic Ocean. EOF analysis transforms the spatial-temporal prediction problem into a time series prediction problem; it reduces computational effort and dimensionality, captures spatial relationships, and accounts for correlations between different variables. CEEMD then improves the predictability of the nonlinear time series, and ANNs are subsequently used to predict the CEEMD-derived time series from the principal components (PCs) corresponding to the EOFs. This work is expected to lay a solid foundation for AI research in oceanography and to provide temporal-spatial predictions of ocean conditions that can be used for marine hazard forecasting and mitigation.
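
A minimal sketch of the EOF stage via SVD on placeholder data (illustrative only; the CEEMD and ANN stages of the pipeline are not shown):

    # Sketch: EOF decomposition of a space-time anomaly field via SVD; the leading
    # principal-component time series are what the CEEMD/ANN stages would then forecast.
    import numpy as np

    rng = np.random.default_rng(0)
    field = rng.normal(size=(365, 50 * 40))          # (time, flattened lat-lon grid), placeholder data
    anomaly = field - field.mean(axis=0)

    U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)
    k = 10
    eofs = Vt[:k]                                    # spatial patterns (EOFs)
    pcs = U[:, :k] * s[:k]                           # principal-component time series
    explained = (s[:k] ** 2) / (s ** 2).sum()        # fraction of variance per mode
    reconstructed = pcs @ eofs + field.mean(axis=0)  # low-rank reconstruction of the field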



Project Title: Characterizing DNN training on Neocortex

Xulong Tang, University of Pittsburgh

Project Abstract: This project aims to characterize Neocortex, the new hardware designed for AI applications. We will study hardware execution statistics, including execution bottlenecks. The results and observations will help develop better application mappings and improve architecture designs for executing AI applications on Neocortex.



Project Title: Interpretable Deep Modeling of SARS-CoV-2 Sequences

Gail Rosen, Drexel University

Project Abstract: We propose to use Neocortex to generate interpretable deep learning models of how SARS-CoV-2 (the virus that causes COVID-19) sequence variation affects viral phenotype, viral evolution, and host phenotype / clinical outcomes. To date, nearly 4.9 million viral genome sequences have been collected and submitted to the GISAID Initiative’s central database (http://www.gisaid.org). This volume of data represents an unprecedented opportunity to learn more about this novel disease and how it is evolving and changing. Building on our research group’s prior work on interpretable deep learning, we employ a Transformer architecture, using an optional CNN filter to reduce model complexity, with a distinct sequence-wide attention layer for the purpose of interpretability. Our framework provides two levels of interpretability, generating both attention graphs that reveal important sequence features and embeddings that can be used to visualize underlying patterns in sequence variation. We will use the Neocortex architecture to analyze larger COVID-19 sequence data sets and improve our deep modeling framework.



Project Title: Exploring Interpretable Deep Learning from Information Theoretic Perspective: Modeling and Applications.

Huajie Shao, The College of William & Mary

Project Abstract: Despite the great success of AI techniques in many different applications, such as computer vision, self-driving cars, and robotics, it is still hard for humans to fully understand and interpret them. The goal of this proposal is to reason about and understand deep learning models by learning disentangled representations. Disentangled representation learning aims to learn a low-dimensional representation that consists of multiple interpretable latent factors of the observations. The semantically meaningful latent factors help us better explain which factors affect classification and prediction accuracy. However, learning disentangled representations with Variational Autoencoder (VAE) models poses two major challenges. First, many existing models require prior knowledge of some data generative factors from human annotation to train the model, which requires substantial human labor. The second challenge is the trade-off between reconstruction and disentanglement learning. This proposal intends to address these two issues by applying control theory, the information bottleneck, self-supervised learning, and causal representation learning. Finally, we plan to apply the disentangled representations from our models to improve downstream tasks, such as image generation, reinforcement learning, and text generation. The proposed solution requires high computing capability, on-device memory, and inter-device communication throughput. We believe the CS-1 WSE is a natural fit for our problem and expect it to significantly reduce the number of GPUs required to train the proposed model.
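
As a hedged illustration of the reconstruction-disentanglement trade-off mentioned above, a beta-weighted VAE objective makes the tension explicit (a toy sketch, not the proposers' model):

    # Sketch: a beta-weighted VAE objective; beta > 1 trades reconstruction
    # fidelity for pressure toward disentangled latent factors.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SmallVAE(nn.Module):
        def __init__(self, d_in=784, d_z=10):
            super().__init__()
            self.enc = nn.Linear(d_in, 2 * d_z)          # outputs mean and log-variance
            self.dec = nn.Linear(d_z, d_in)
        def forward(self, x):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
            return self.dec(z), mu, logvar

    def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
        recon = F.mse_loss(x_hat, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + beta * kl

    model = SmallVAE()
    x = torch.rand(32, 784)
    x_hat, mu, logvar = model(x)
    loss = beta_vae_loss(x, x_hat, mu, logvar)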



Project Title: Large-scale Pre-training for Natural Language to Code Generation

Graham Neubig, Carnegie Mellon University

Project Abstract: This project aims to create pre-trained models for natural language to code generation, the task of generating programs from natural language descriptions. This has the potential to make programming easier, and perhaps even allow for command and control of computers by non-programmers. Our research team has a large amount of experience in this area, but lacks the resources to scale models to very large datasets, such as training on the entirety of GitHub, which this proposal aims to address. We also plan to examine novel models for code generation based on non-parametric models, which look up related examples in a training corpus; this is important both for performance and for interpretability. All models we develop will be made available open source for the community to use.



Project Title: Automated sleep states classification for wide-field calcium imaging using deep learning

Mark Anastasio, University of Illinois Urbana-Champaign

Project Abstract: Wide-field calcium imaging (WFCI) with genetically encoded calcium indicators enables spatial-temporal recordings of neuronal depolarization in mice on a sub-second temporal scale, with simultaneous examination of neurovascular coupling and cell-type specificity. When applied to the study of sleep, it requires human experts to manually score hours of WFCI recordings using adjunct electroencephalogram (EEG) and electromyogram (EMG) signals. However, this process is tedious and time-consuming, often suffers from low inter- and intra-rater reliability, and is invasive. Therefore, an automated sleep-state classification method applied to sequential WFCI data is desired. Given that sleep is a cyclic process and that WFCI provides high temporal resolution, we are interested in investigating deep learning models that exploit temporal dependencies among events to classify sleep states on a large-scale dataset of spatial-temporal sequential WFCI recordings. In addition, uncovering the spatial-temporal features underlying calcium dynamics in mice by use of deep learning may enable future sleep-focused studies with WFCI.



Project Title: Impute cell free DNA fragmentation pattern from low-coverage whole-genome sequencing

Yaping Liu, Cincinnati Children's Hospital Medical Center

Project Abstract: TBA



Project Title: Large-scale spiking network models to explain dynamics of visual perception and working memory

Lyle Muller, Salk Institute for Biological Studies

Project Abstract: TBA



Project Title: An Integrated Machine Learning Platform of GWAS (Genome Wide Association Study) and Epigenetics for Personalized Bladder Cancer Clinical Applications

Zhiyong Zhang, Stanford University

Project Abstract: TBA



Project Title: AI Understanding of Ultrasound Scans: Semantic Segmentation and Diagnosis Trained with Simulation and Genetic/Back-Prop Hybrid Training

John Galeotti, Carnegie Mellon University

Project Abstract: TBA



Project Title: L2RACE CAR RNN control

Arthur Lobo,

Project Abstract:



Project Title: Identifying Actor Characteristics in State-Linked Information Operations Using Twitter Data and Graph Based Neural Networks

John Wohlbier, Carnegie Mellon University

Project Abstract: TBA



Project Title: Voxel Pretraining for Few-Shot Learning

William Bradley, Mirabolic Consulting

Project Abstract: Because large, labelled medical imaging datasets can be difficult to collect, we are interested in few-shot learning problems related to medical imaging (MRI and CT scans). To that end, we are interested in pretraining a voxel-based transformer network using a masked language model (MLM) objective through modifications of BERT. Voxel count grows cubically with edge size, and a standard transformer’s memory usage grows quadratically with the number of input elements, so traditional models can only examine voxels within a very small radius of a target point. We hope that the greater memory bandwidth of Neocortex will allow us to increase this limit, and that the larger context will improve model performance.
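
A short worked example of the scaling argument, using illustrative cube sizes and float32 attention scores (the specific numbers are assumptions, not figures from the proposal):

    # Worked example: tokens grow cubically with the context edge length, and
    # naive self-attention memory grows quadratically with the number of tokens.
    for edge in (8, 16, 32, 64):                  # cube edge length in voxels (illustrative)
        tokens = edge ** 3
        attn_entries = tokens ** 2                # one attention matrix, per head and layer
        gib = attn_entries * 4 / 2 ** 30          # float32 bytes -> GiB
        print(f"edge={edge:3d}  tokens={tokens:8d}  attention matrix ~{gib:10.2f} GiB")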



Project Title: Training of conservative physics-informed neural networks (CPINN) to solve the incompressible Navier-Stokes equation at high Reynolds number

George Karniadakis, Brown University

Project Abstract: TBA



Project Title: Simulation and Benchmarking of Quantum Machine Learning

Jason Larkin, Carnegie Mellon University

Project Abstract: TBA