
Computation Research Center



Human Brain Atlas

Keith Bush et al., Department of Computer Science

Understanding how anatomical brain regions self-organize into functional units is an important problem in neuroimaging. Researchers in the Department of Computer Science at UALR have been working closely with researchers at the Brain Imaging Research Center at UAMS to automatically construct an atlas (a three-dimensional anatomical map) of the functional brain regions directly from the fMRI BOLD signal. We are currently cross-validating parcellations built from subjects recorded during mental “rest”. An example brain parcellation (sagittal plane view) is depicted in the figure (colored regions denote functional units). Future work will involve constructing a functionally modulated anatomical atlas from both resting-state data and data sampled while subjects undergo cognitive tasks.
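The parcellation idea above — grouping voxels whose BOLD time series behave alike — can be sketched with a plain k-means clustering in NumPy. This is only an illustrative toy (synthetic signals stand in for real fMRI data; it is not the group's actual pipeline):

```python
import numpy as np

def kmeans_parcellate(ts, k, iters=50, seed=0):
    """Cluster voxel time series into k 'functional parcels' with plain k-means."""
    rng = np.random.default_rng(seed)
    # Normalize each series so clustering reflects temporal shape, not amplitude.
    ts = (ts - ts.mean(axis=1, keepdims=True)) / (ts.std(axis=1, keepdims=True) + 1e-12)
    # Farthest-point initialization: spread the initial centers apart.
    centers = np.empty((k, ts.shape[1]))
    centers[0] = ts[rng.integers(len(ts))]
    for j in range(1, k):
        d = ((ts[:, None, :] - centers[None, :j, :]) ** 2).sum(axis=2).min(axis=1)
        centers[j] = ts[d.argmax()]
    for _ in range(iters):
        # Assign each voxel to its nearest center, then recompute the centers.
        d = ((ts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = ts[labels == j].mean(axis=0)
    return labels

# Toy data: 60 "voxels" drawn from two distinct temporal patterns.
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 40)
sig_a = np.sin(t) + 0.1 * rng.standard_normal((30, 40))
sig_b = np.cos(t) + 0.1 * rng.standard_normal((30, 40))
labels = kmeans_parcellate(np.vstack([sig_a, sig_b]), k=2)
```

On this toy input the two temporal patterns are cleanly separated into two parcels; real resting-state data would of course require far more careful preprocessing and model selection.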


Computational History

Keith Bush et al., Department of Computer Science

N-Gram is a tool that analyzes the frequency of word occurrences relative to all word occurrences in the Google Books dataset, the single largest digital corpus in the world, which is proprietary to Google and unavailable to the general public. For research purposes, we have acquired a smaller, open-source dataset, the HathiTrust corpus, comprising approximately 250,000 texts published between 1800 and 1920. We have constructed an in-house N-Gram computation system to examine the robustness of the statistical similarity between the HathiTrust corpus and the Google Books corpus. An example comparison between N-gram frequency plots for the word “case” is depicted in the figure below. Validation of the statistical similarity between the HathiTrust and Google Books datasets is ongoing.
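The core of such an N-gram (here, unigram) frequency computation is straightforward; below is a minimal sketch in Python, where the tiny corpus, years, and whitespace tokenization are invented purely for illustration:

```python
from collections import Counter

def ngram_frequencies(docs_by_year, word):
    """Relative frequency of `word` per year: count(word) / total tokens that year."""
    freqs = {}
    for year, docs in docs_by_year.items():
        counts = Counter(tok.lower() for doc in docs for tok in doc.split())
        total = sum(counts.values())
        freqs[year] = counts[word] / total if total else 0.0
    return freqs

# Invented miniature corpus, standing in for ~250,000 HathiTrust texts.
corpus = {
    1800: ["in this case the court held", "the case was closed"],
    1810: ["no case to answer here"],
}
freqs = ngram_frequencies(corpus, "case")
```

Plotting such per-year relative frequencies for the same word over two corpora is what the comparison figure shows.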


The Link Between Supermassive Black Holes and Dark Matter Halos in Disk Galaxies

Marc Seigar et al., Department of Physics and Astronomy

The discovery of a relationship between supermassive black hole (SMBH) mass and spiral arm pitch angle (P) is evidence that SMBHs are tied to the overall secular evolution of a galaxy. The discovery of SMBHs in late-type galaxies with little or no bulge suggests that an underlying correlation exists between the dark matter halo concentration and SMBH mass (MBH), rather than between the bulge mass and MBH. We have recently published a paper in which we measured P using a two-dimensional fast Fourier transform and estimated the bar pattern speeds of 40 barred spiral galaxies from the Carnegie-Irvine Galaxy Survey. The pattern speeds were derived by estimating the gravitational potentials of our galaxies from Ks-band images and using them to produce dynamical simulation models. The pattern speeds allow us to identify those galaxies with low central dark halo densities, or fast-rotating bars, while P provides an estimate of MBH.

Results of the sticky-particle simulations
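The 2D FFT measurement of P rests on one observation: resampled onto (ln r, θ) coordinates, a logarithmic spiral becomes a plane wave, so the dominant Fourier mode (p, m) gives the pitch angle as P = arctan(−m/p). A toy NumPy sketch on a synthetic two-armed spiral (not the published pipeline, which works on real Ks-band images):

```python
import numpy as np

N = 128
u = 2 * np.pi * np.arange(N) / N          # u = ln r (log-radius), one period
theta = 2 * np.pi * np.arange(N) / N      # azimuthal angle
m_true, p_true = 2, -6                    # harmonic mode and radial frequency
U, T = np.meshgrid(u, theta, indexing="ij")
img = np.cos(m_true * T + p_true * U)     # synthetic two-armed log spiral

F = np.fft.fft2(img)
pu = np.fft.fftfreq(N, d=1 / N).astype(int)   # signed frequency along u
pm = np.fft.fftfreq(N, d=1 / N).astype(int)   # signed frequency along theta
j, k = np.unravel_index(np.abs(F).argmax(), F.shape)
p_hat, m_hat = pu[j], pm[k]
if m_hat < 0:                              # pick the conjugate peak with m > 0
    p_hat, m_hat = -p_hat, -m_hat
pitch_deg = float(np.degrees(np.arctan(-m_hat / p_hat)))
```

Here the recovered mode (p, m) = (−6, 2) gives a pitch angle of about 18.4 degrees for this synthetic two-armed spiral.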

Monte Carlo Modeling of Protein Structures and Polymeric Materials

Jerry A. Darsey et al., Department of Chemistry

The use of metal hydrides as a storage solution for the hydrogen economy has commanded national attention. Since there are a multitude of metal hydride alloys, and even these alloys present multiple surfaces when deposited, our group is focused on using ab initio software and artificial neural networks (ANNs) both to help validate initial experimental findings and to explore new alloy combinations.

Energy profiles of H2 on (001) Mg and (001) Mg doped

Solving Curl-curl Equations in Three-dimensional Space

Xiu Ye and Lin Min, Department of Mathematics and Statistics

This project solves the curl-curl equation using the newly introduced weak Galerkin finite element methods. This equation is one of the most important partial differential equations in electromagnetics. We have developed a numerical scheme for solving curl-curl problems by weak Galerkin finite element methods. Several numerical experiments have been conducted: the error profile for velocity was obtained on a very coarse mesh, and results for velocity and pressure on fine meshes were obtained through further computations.

The cube meshes
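For context, a standard model problem of this type (stated generically here; the exact formulation used in the project may differ) is: find a vector field u on a domain Ω satisfying

    curl(curl u) + u = f   in Ω,
    u × n = 0              on the boundary ∂Ω,

where f is a given source term and n is the outward unit normal. The weak Galerkin approach approximates u by discontinuous polynomials on each mesh element, together with separately defined polynomial traces on element faces, replacing the classical curl operator by a discretely computed "weak curl".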

Inference of Genetic Regulatory Networks Using Regularized Likelihood with Covariance Estimation

Nidhal Bouaynaya et al., Department of Systems Engineering

We are investigating the problem of reverse-engineering the connectivity matrix of genetic regulatory networks from a limited number of measurements as a regularized multivariate regression problem. The regularization term incorporates the prior knowledge that genetic regulatory networks are sparse. Moreover, the genetic profiles within a measurement are assumed to be correlated with a full covariance structure. The proposed algorithm computes a sparse estimate of the connectivity matrix that accounts for correlated errors using regularized likelihood. We have tested and validated our approach using synthetically generated networks.

Percentage error versus number of measurements for different network sizes; critical measurements for different network sizes
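The sparsity-promoting part of such an estimator can be sketched with iterative soft-thresholding (ISTA) for an L1-penalized least-squares fit of the connectivity matrix. This toy version omits the full error-covariance modeling described above, and the network and data below are synthetic:

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding operator for the L1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_connectivity(X, Y, lam=0.05, iters=500):
    """ISTA for min_A 0.5*||Y - A X||_F^2 + lam*||A||_1 (row-wise lasso)."""
    A = np.zeros((Y.shape[0], X.shape[0]))
    L = np.linalg.norm(X @ X.T, 2)        # Lipschitz constant of the gradient
    for _ in range(iters):
        grad = (A @ X - Y) @ X.T          # gradient of the quadratic term
        A = soft(A - grad / L, lam / L)   # gradient step, then shrink
    return A

# Toy network: 5 genes, sparse true connectivity, noisy linear measurements.
rng = np.random.default_rng(0)
A_true = np.zeros((5, 5))
A_true[0, 1] = 0.8
A_true[2, 4] = -0.6
X = rng.standard_normal((5, 200))
Y = A_true @ X + 0.01 * rng.standard_normal((5, 200))
A_hat = sparse_connectivity(X, Y)
```

With ample synthetic data the two true edges are recovered and the remaining entries shrink toward zero; the interesting regime studied in the project is the opposite one, where measurements are few.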

Multivariate Tests for the Analyses of Differentially Expressed Gene Sets

Yasir Rahmatallah et al., UAMS

The analysis of differentially expressed gene sets (pathways) has become routine in the analysis of gene expression data (microarrays and RNA-Seq). A multitude of tests is available, ranging from aggregation tests that summarize gene-level statistics for a gene-set characterization to true multivariate tests, such as Hotelling’s T2 and N statistics, which account for the correlation structure between genes. However, non-parametric, distribution-free multivariate tests, which do not rely on distributional assumptions (to which expression data typically do not conform), have never been considered in this context.

Detection power of different methods
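As a reference point, the two-sample Hotelling’s T2 statistic mentioned above can be computed in a few lines; this sketch assumes multivariate-normal data (the very assumption the distribution-free alternatives avoid) and uses a synthetic two-group gene set:

```python
import numpy as np
from scipy import stats

def hotelling_t2(X, Y):
    """Two-sample Hotelling T^2 test; returns (T2, F statistic, p-value)."""
    n1, p = X.shape
    n2, _ = Y.shape
    d = X.mean(axis=0) - Y.mean(axis=0)
    # Pooled sample covariance of the two groups.
    S = ((n1 - 1) * np.cov(X, rowvar=False)
         + (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)
    T2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S, d)
    # Under normality, a scaled T2 follows an F(p, n1+n2-p-1) distribution.
    F = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * T2
    pval = stats.f.sf(F, p, n1 + n2 - p - 1)
    return T2, F, pval

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 4))                        # "control" set, 4 genes
Y = rng.standard_normal((30, 4)) + np.array([1.5, 0, 0, 0])  # shift in one gene
t2, f, p = hotelling_t2(X, Y)
```

Because the statistic pools the full 4x4 covariance, it detects the shift in a single gene while accounting for inter-gene correlation.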

Modeling and Fabricating Nanotoroid Antenna Pairs to Plasmon-enhance Solar Photovoltaics

Magda O. El-Shenawee et al., UA-Fayetteville

It is well known that conventional photovoltaic (PV) cells cannot use about 55% of the energy of sunlight. Plasmonic nano-antennas have shown potential, when properly tuned, to increase PV efficiency. El-Shenawee’s computational work in the current NSF project aims to design and fabricate gold nanotoroid antennas and to measure the anticipated increase in PV efficiency. A main challenge is the multiscale nature of the model: the silicon surface is much larger than the nanotoroid. To obtain correct results, the Si surface needs to span several wavelengths in the band of interest, 400 nm to 1200 nm. Although her group is working on implementing accelerators such as the fast multipole method (FMM), large-scale cases need to be executed first to investigate the memory and CPU-time requirements of the method-of-moments computational model. Erbium, the 4 TB-memory system at UALR, has been a main high-performance resource in this research.
Nanotoroids on Silicon Photovoltaic for energy enhancement

HPC System Performance Predictions

Kenji Yoshigoe et al., Department of Computer Science

High Performance Computing (HPC) systems are assembled from manufacturers’ combinations of hardware components and software package bundles, each having unique strengths, capabilities, and operational speeds. Each HPC system owner, prospective or current, also has a unique combination of jobs that need to be performed with a minimum of resource consumption and a maximum of efficiency. Thus, there needs to be a well-defined method that an ordinary user with limited resources can use to determine which HPC system will best fulfill their particular job load. We have built and validated the functionality of an HPC simulator: we 1) measured the performance of an HPC system of known size, 2) fed the system statistics to the developed package, and 3) ran the simulation. We found that the job request size/frequency, job inter-arrival time, and service time of the HPC simulation all had 3% or less error relative to those of the real HPC system. Furthermore, the HPC simulator was able to predict the performance of an HPC system of a different size by merely utilizing the system statistics of the HPC system of a base size. The ability of this simulator to achieve “statistically equivalent” results for an HPC system opens the door to researching the operational impacts of different HPC system hardware or software configurations and capabilities.
HPC simulator service time for HPC size = 256 nodes
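The simulator itself is not shown here, but the event-driven flavor of such a tool can be sketched with a minimal first-come-first-served node allocator (the jobs below are made up; the real simulator is driven by measured job statistics):

```python
import heapq

def simulate_fcfs(n_nodes, jobs):
    """Event-driven FCFS cluster simulator.

    jobs: list of (arrival_time, nodes_needed, runtime) tuples.
    Returns the mean wait time (queue delay before each job starts).
    """
    events = []            # min-heap of (finish_time, nodes_freed)
    free = n_nodes
    clock = 0.0
    waits = []
    for arrival, need, runtime in sorted(jobs):
        clock = max(clock, arrival)
        # FCFS: block until enough running jobs finish to free the nodes.
        while free < need:
            finish, freed = heapq.heappop(events)
            clock = max(clock, finish)
            free += freed
        waits.append(clock - arrival)
        free -= need
        heapq.heappush(events, (clock + runtime, need))
    return sum(waits) / len(waits)

# Two nodes; the third job must wait until one of the first two finishes.
jobs = [(0.0, 1, 10.0), (0.0, 1, 10.0), (1.0, 1, 5.0)]
mean_wait = simulate_fcfs(2, jobs)
```

Feeding measured arrival, request-size, and service-time distributions into a loop like this is what allows simulated statistics to be compared against those of the real system.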

Acknowledgement - Research is supported in part by the National Science Foundation through grants CRI CNS-0855248, EPS-0701890, EPS-0918970, MRI CNS-0619069, and OISE-0729792.

Updated 9.6.2014