1 | Konagurthu, A.S., Subramanian, R., Allison, L., Abramson, D., Stuckey, P.J., Garcia de la Banda, M. and Lesk, A.M. | Universal Architectural Concepts Underlying Protein Folding Patterns What is the architectural 'basis set' of the observed universe of protein structures? Using information-theoretic inference, we answer this question with a dictionary of 1,493 substructures—called concepts—typically at a subdomain level, based on an unbiased subset of known protein structures. Each concept represents a topologically conserved assembly of helices and strands that make contact. Any protein structure can be dissected into instances of concepts from this dictionary. We dissected the Protein Data Bank and completely inventoried all the concept instances. This yields many insights, including correlations between concepts and catalytic activities or binding sites, useful for rational drug design; local amino-acid sequence–structure correlations, useful for ab initio structure prediction methods; and information supporting the recognition and exploration of evolutionary relationships, useful for structural studies. An interactive site, Proçodic, at http://lcb.infotech.monash.edu.au/prosodic, provides access to and navigation of the entire dictionary of concepts and their usages, and all associated information. This report is part of a continuing programme with the goal of elucidating fundamental principles of protein architecture, in the spirit of the work of Cyrus Chothia. | Frontiers in Molecular Biosciences, Vol 7, 2021, pp491, DOI: 10.3389/fmolb.2020.612920, ISSN:2296-889X | 2021 |  |
2 | Parashar, M. and Abramson, D. | Translational Computer Science for Science and Engineering | IEEE Computing in Science & Engineering, DOI 10.1109/MCSE.2021.3109962 | 2021 |  |
3 | Lesk, A.M., Konagurthu, A.S., Allison, L., de la Banda, M.G., Stuckey, P.J. and Abramson, D. | Computer modelling of a potential agent against SARS-CoV-2 (COVID-19) protease. We have modeled modifications of a known ligand to the SARS-CoV-2 (COVID-19) protease that can form a covalent adduct, plus additional ligand-protein hydrogen bonds. | Proteins. doi:10.1002/prot.25980 | 2020 |  |
4 | Abramson, D., Parashar, M., Arzberger, P. | Translational computer science – Overview of the special issue The special issue considers 11 projects that have performed Translational Computer Science. Interestingly, we observe that these groups have been using this research methodology without it being recognised - in many cases they just did it 'because it was the right thing to do'. | Journal of Computational Science, 2020, 101227, ISSN 1877-7503, https://doi.org/10.1016/j.jocs.2020.101227. | 2020 |  |
5 | Abramson, D., Jin, C., Luong, J. and Carroll, J. | A BeeGFS-Based Caching File System for Data-Intensive Parallel Computing Modern high-performance computing (HPC) systems are increasingly using large amounts of fast storage, such as solid-state drives (SSD), to accelerate disk access times. This approach has been exemplified in the design of “burst buffers”, but more general caching systems have also been built. This paper proposes extending an existing parallel file system to provide such a file caching layer. The solution unifies data access for both the internal storage and external file systems using a uniform namespace. It improves storage performance by exploiting data locality across storage tiers, and increases data sharing between compute nodes and across applications. Leveraging data striping and meta-data partitioning, the system supports high speed parallel I/O for data intensive parallel computing. Data consistency across tiers is maintained automatically using a cache aware access algorithm. A prototype has been built using BeeGFS to demonstrate rapid access to an underlying IBM Spectrum Scale file system. Performance evaluation demonstrates a significant improvement in efficiency over an external parallel file system. | Supercomputing Frontiers. SCFA 2020. Lecture Notes in Computer Science, vol 12082. Springer, Cham. | 2020 |  |
6 | Dinh, M., Trung, C. and Abramson, D. | Tracking scientific simulation using online time-series modelling The increase in compute power and complexity of supercomputing systems requires decreases in the feature size and supply voltage of internal components. Such development makes unintended errors such as soft errors, potentially caused by random bit flips, inevitable because of the huge size of the resources (such as CPU cores and memory). In this paper, we discuss a non-parametric statistical modelling technique to implement a soft error detector. By exploring temporal autocorrelation within key variables of a running scientific simulation, we introduce an automatic anomaly detection technique in which runtime data from a time-step based simulation can be converted into a time series, and a time series modelling technique can be used to identify soft errors at runtime. Experiments with LAMMPS, a high-performance molecular dynamics simulator, and with PLUTO, an open-source astrophysical code, reveal that the time-series based detector incurs false-positive and false-negative rates of less than 3% with only 6% performance overhead. | The 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing, May 11-14, 2020, Melbourne, Victoria, Australia. | 2020 |  |
7 | Endrei, M., Jin, C., Dinh, M. N., Abramson, D., Poxon, H., DeRose, L., & de Supinski, B. R. | Statistical and machine learning models for optimizing energy in parallel applications. Rising power costs and constraints are driving a growing focus on the energy efficiency of high performance computing systems. The unique characteristics of a particular system and workload and their effect on performance and energy efficiency are typically difficult for application users to assess and to control. Settings for optimum performance and energy efficiency can also diverge, so we need to identify trade-off options that guide a suitable balance between energy use and performance. We present statistical and machine learning models that only require a small number of runs to make accurate Pareto-optimal trade-off predictions using parameters that users can control. We study model training and validation using several parallel kernels and more complex workloads, including Algebraic Multigrid (AMG), Large-scale Atomic Molecular Massively Parallel Simulator, and Livermore Unstructured Lagrangian Explicit Shock Hydrodynamics. We demonstrate that we can train the models using as few as 12 runs, with prediction error of less than 10%. Our AMG results identify trade-off options that provide up to 45% improvement in energy efficiency for around 10% performance loss. We reduce the sample measurement time required for AMG by 90%, from 13 h to 74 min. | The International Journal of High Performance Computing Applications, 33(6), 1079–1097. https://doi.org/10.1177/1094342019842915. | 2019 |  |
8 | Abramson, D. and Parashar, M. | Translational Research in Computer Science There are benefits to formalizing translational computer science (TCS) to complement traditional modes of computer science research, as has been done for translational medicine. TCS has the potential to accelerate the impact of computer science research overall. | Computer, vol. 52, no. 9, pp. 16-23, Sept. 2019, doi: 10.1109/MC.2019.2925650 | 2019 |  |
9 | Abramson, D., Carroll, J., Jin, C., Mallon, M., van Iperen, Z., Nguyen, H., McRae, A. and Ming, L. | A Cache-based Data Movement Infrastructure for On-demand Scientific Cloud Computing As cloud computing has become the de facto standard for big data processing, there is interest in using a multi-cloud environment that combines public cloud resources with private on-premise infrastructure. However, by decentralizing the infrastructure, a uniform storage solution is required to provide data movement between different clouds to assist on-demand computing. This paper presents a solution based on our earlier work, the MeDiCI (Metropolitan Data Caching Infrastructure) architecture. Specifically, we extend MeDiCI to simplify the movement of data between different clouds and a centralized storage site. It uses a hierarchical caching system and supports most popular infrastructure-as-a-service (IaaS) interfaces, including Amazon AWS and OpenStack. As a result, our system allows existing parallel data-intensive applications to be offloaded into IaaS clouds directly. The solution is illustrated using a large bioinformatics application, a Genome Wide Association Study (GWAS), with Amazon's AWS, HUAWEI Cloud, and a private centralized storage system. The system is evaluated on Amazon AWS and the Australian national cloud. | Supercomputing Asia 2019, March 11-14th, 2019, Singapore. | 2019 |  |
10 | Endrei, M., Jin, C., Dinh, M., Abramson, D., Poxon, H., DeRose, L. and de Supinski, B. | Energy Efficiency Modeling of Parallel Applications Energy efficiency has become increasingly important in high performance computing (HPC), as power constraints and costs escalate. Workload and system characteristics form a complex optimization search space in which optimal settings for energy efficiency and performance often diverge. Thus, we must identify trade-off options for performance and energy efficiency to find the desired balance between them. We present an innovative statistical model that accurately predicts the Pareto optimal performance and energy efficiency trade-off options using only user-controllable parameters. Our approach can also tolerate both measurement and model errors. We study model training and validation using several HPC kernels, then explore the feasibility of applying the model to more complex workloads, including AMG and LAMMPS. We can calibrate an accurate model from as few as 12 runs, with prediction error of less than 10%. Our results identify trade-off options allowing up to 40% improvement in energy efficiency at the cost of under 20% performance loss. For AMG, we reduce the required sample measurement time from 13 hours to 74 minutes (about 90%). | IEEE Supercomputing SC18, Dallas Texas, November 2018. https://doi.org/10.1109/SC.2018.00020 | 2018 |  |
11 | Chao, J., de Supinski, B., Abramson, D., Poxon, H., De Rose, L., Dinh, M., Endrei, M., Jessup, E. | A survey on software methods to improve the energy efficiency of parallel computing in the application layer Energy consumption is one of the top challenges for achieving the next generation of supercomputing. Codesign of hardware and software is critical for improving energy efficiency (EE) for future large-scale systems. Many architectural power-saving techniques have been developed, and most hardware components are approaching physical limits. Accordingly, parallel computing software, including both applications and systems, should exploit power-saving hardware innovations and manage efficient energy use. In addition, new power-aware parallel computing methods are essential to decrease energy usage further. This article surveys software-based methods that aim to improve EE for parallel computing. It reviews the methods that exploit the characteristics of parallel scientific applications, including load imbalance and mixed precision of floating-point (FP) calculations, to improve EE. In addition, this article summarizes widely used methods to improve power usage at different granularities, such as the whole system and per application. In particular, it describes the most important techniques to measure and to achieve energy-efficient usage of various parallel computing facilities, including processors, memories, and networks. Overall, this article reviews the state-of-the-art of energy-efficient methods for parallel computing to motivate researchers to achieve optimal parallel computing under a power budget constraint. | International Journal of High Performance Computing Application, Vol 31, Issue 6, 2017, pp 517-549, doi:10.1177/1094342016665471. | 2017 |  |
12 | Subramanian, R., Allison, L., Stuckey, P. J., Garcia de la Banda, M., Abramson, D., Lesk, A. M., and Konagurthu, A. | Statistical compression of protein folding patterns and inference of recurrent sub-structural themes Computational analyses of the growing corpus of three-dimensional (3D) structures of proteins have revealed a limited set of recurrent substructural themes, termed super-secondary structures. Knowledge of super-secondary structures is important for the study of protein evolution and for the modeling of proteins with unknown structures. Characterizing a comprehensive dictionary of these super-secondary structures has been an unanswered computational challenge in protein structural studies. This paper presents an unsupervised method for learning such a comprehensive dictionary using the statistical framework of lossless compression on a database comprised of concise geometric representations of protein 3D folding patterns. The best dictionary is defined as the one that yields the most compression of the database. Here we describe the inference methodology and the statistical models used to estimate the encoding lengths. An interactive website for this dictionary is available at http://lcb.infotech.monash.edu.au/proteinConcepts/scop100/dictionary.html. | Data Compression Conference (DCC’2017), Salt Lake City, Utah, 4 – 7 April, 2017. | 2017 |  |
13 | Nguyen, H., van Iperen, Z., Raghunath, S., Abramson, D., Kipouros, T. and Somasekharan, S. | Multi-objective optimisation in scientific workflow Engineering design is typically a complex process that involves finding a set of designs satisfying various performance criteria. As a result, optimisation algorithms dealing with only a single objective are not sufficient for many real-life problems. Meanwhile, scientific workflows have been shown to be an effective technology for automating and encapsulating scientific processes. While optimisation algorithms have been integrated into workflow tools, they are generally single-objective. This paper first presents our latest development to incorporate multi-objective optimisation algorithms into scientific workflows. We demonstrate the efficacy of these capabilities with the formulation of a three-objective aerodynamics optimisation problem. We aim to improve the aerodynamic characteristics of a typical 2D airfoil profile, also considering the laminar-turbulent transition location for a more accurate estimation of the total drag. We deploy two different heuristic optimisation algorithms and compare the preliminary results. | 2017 International Conference on Computational Science, Zurich, 12-14 June, 2017. | 2017 |  |
14 | Nguyen, H., Bland, L., Roberts, T., Guru, S., Dinh, M. and Abramson, D. | A computational pipeline for the IUCN risk assessment for Meso-American reef ecosystem Coral reefs are of global economic and biological significance but are subject to increasing threats. As a result, it is essential to understand the risk of coral reef ecosystem collapse and to develop assessment processes for those ecosystems. The International Union for Conservation of Nature (IUCN) Red List of Ecosystems (RLE) is a framework to assess the vulnerability of an ecosystem. Importantly, the assessment processes need to be repeatable as new monitoring data arise. The repeatability will also enhance transparency. In this paper, we discuss the evolution of a computational pipeline for risk assessment of the Meso-American reef ecosystem, a diverse reef ecosystem located in the Caribbean, with a focus on improving execution time, starting from sequential and parallel implementations and finally using Apache Spark. The final form of the pipeline is a scientific workflow to improve its repeatability and reproducibility. | 13th IEEE eScience Conference, Auckland, New Zealand 24th – 27th October, 2017. | 2017 |  |
15 | Abramson, D., Carroll, J., Jin, C. and Mallon, M. | A Metropolitan Area Infrastructure for Data Intensive Science The increasing amount of data being collected from simulations, instruments and sensors creates challenges for existing e-Science infrastructure. In particular, it requires new ways of storing, distributing and processing data in order to cope with both the volume and velocity of the data. The University of Queensland has recently designed and deployed MeDiCI, a data fabric that spans the metropolitan area and provides seamless access to data regardless of where it is created, manipulated and archived. MeDiCI is novel in that it exploits temporal and spatial locality to move data on demand in an automated manner. This means that data only needs to reside locally in high speed storage whilst being manipulated, and it can be archived transparently in high capacity, but slower, technologies at other times. MeDiCI is built on commercially available technologies. In this paper, we describe these innovations and present some early results. | 13th IEEE eScience Conference, Auckland, New Zealand 24th – 27th October, 2017. | 2017 |  |
16 | Endrei, M., Jin, C., Dinh, M., Abramson, D., Poxon, H., DeRose, L. and de Supinski, B. | A Bottleneck-centric Tuning Policy for Optimizing Energy in Parallel Programs In order to operate within power supply constraints, the next generation of supercomputers must be energy efficient. Both the capacities of the target HPC system architecture and workload features impact the energy efficiency of parallel applications. These system and workload factors form a complicated optimization search space. Further, a typical workload may consist of multiple algorithmic kernels each with different power consumption patterns. Using the Parallel Research Kernels as a case study, we identify key bottlenecks that change the energy usage pattern and develop strategies that improve energy efficiency by optimizing both workload and system parameters in an automated manner. The method provides significant insights to identify repeatable, statistically significant energy saving opportunities for parallel applications at various scales. | 2017 International Conference on Parallel Computing (ParCo), Bologna, Italy 12-15, September 2017. https://doi.org/10.3233/978-1-61499-843-3-265 | 2017 |  |
17 | Dinh, M., Abramson, D. and Chao, J. | Runtime verification of large scientific codes using statistics Runtime verification of large-scale scientific codes is difficult because they often involve thousands of processes, and generate very large data structures. Further, the programs often embody complex algorithms making them difficult for non-experts to follow. Notably, typical scientific codes implement mathematical models that often possess predictable statistical features. Therefore, incorporating statistical analysis techniques in the verification process allows using the program's state to reveal unusual details of the computation at runtime. In our earlier work, we proposed a statistical framework for debugging large-scale applications. In this paper, we argue that such a framework can be useful in the runtime verification process of scientific codes. We demonstrate how two production simulation programs are verified using statistics. The system is evaluated on a 20,000-core Cray XE6. | ICCS 2016, June 2016, San Diego. | 2016 |  |
18 | Dinh, M., Chao, J., Abramson, D. and Jeffery, C. | Runtime Verification of Scientific Computing: Towards an Extreme Scale Relative debugging helps trace software errors by comparing two concurrent executions of a program - one code being a reference version and the other faulty. By locating data divergence between the runs, relative debugging is effective at finding coding errors when a program is scaled up to solve larger problem sizes or migrated from one platform to another. In this work, we envision potential changes to our current relative debugging scheme in order to address exascale factors such as the increase in faults and nondeterministic outputs. First, we propose a statistical-based comparison scheme to support verifying results that are stochastic. Second, we leverage a scalable data reduction network to adapt to the complex network hierarchy of an exascale system, and extend our debugger to support the statistical-based comparison in an environment subject to failures. | 5th Workshop on Extreme-Scale Programming Tools (ESPT), Held in conjunction with SC16, Salt Lake City, 13th November, 2016. | 2016 |  |
19 | Nguyen, H., Abramson, D., Guru, S., Sun, Y. | CoESRA: From virtual desktop to science gateway The Collaborative Environment for Ecosystem Science Research and Analysis (CoESRA) is a Web-based virtual desktop environment that integrates existing eResearch infrastructure in Australia for synthesis and analysis of scientific data for the ecological science community. Data synthesis and analysis are performed through scientific workflows. Although powerful, these have a steep learning curve for novice users. We have implemented a Web layer on top of an existing virtual desktop layer to hide this complexity from users. This new layer allows users to execute scientific workflows without requiring a desktop, and thus reduces the learning curve. The virtual desktop is still accessible for more advanced users. | Gateways 2016, November 2 – 3, San Diego, 2016. | 2016 |  |
20 | Quenette, S., Xi, Y., Mansour, J., Moresi, L. and Abramson, D. | Underworld-GT applied to Guangdong, a tool to explore the geothermal potential of the crust Geothermal energy potential is usually discussed in the context of conventional or engineered systems and at the scale of an individual reservoir. Whereas exploration for conventional reservoirs has been relatively easy, with expressions of resource found close to or even at the surface, exploration for non-conventional systems relies on temperature inherently increasing with depth and searching for favourable geological environments that maximise this increase. To utilise the information we do have, we often assimilate available exploration data with models that capture the physics of the dominant underlying processes. Here, we discuss computational modelling approaches to exploration at a regional or crust scale, with application to geothermal reservoirs within basins or systems of basins. Target reservoirs have (at least) appropriate temperature, permeability and are at accessible depths. We discuss the software development approach that leads to effective use of the tool Underworld. We explore its role in the process of modelling, understanding computational error, importing and exporting geological knowledge as applied to the geological system underpinning the Guangdong Province, China. KEY WORDS: Underworld-GT, geothermal potential, computational modelling approach, Guangdong. | Journal of Earth Science, Vol 26, No 1, pp 78 – 88, Feb 2015, doi 10.1007/s12583-015-0517-z. | 2015 |  |
21 | Nguyen, H., Abramson, D, Kipouros, T, Janke, A and Galloway, G. | WorkWays: Interacting with Scientific Workflows WorkWays is a science gateway that supports human-in-the-loop scientific workflows. Human–workflow interactions are enabled by a dynamic Input Output (IO) model, which allows users to insert data into, or export data out of, a continuously running workflow. WorkWays has been used to solve a number of scientific problems where the user wishes to examine intermediate results in order to interact with the computation as the workflow progresses. This interactive capability not only provides better insights into the computation but also allows users to focus on different input parameter combinations. We have implemented a variety of data types and modes of interaction to account for a wide range of use cases and application domains. This paper demonstrates the applicability of WorkWays on three use cases from different domains. | Concurrency Computation: Practice and Experience, 27: 4377–4397, 21 May 2015, doi: 10.1002/cpe.3525. | 2015 |  |
22 | Beringer, J., Hutley, L. B., Abramson, D., Arndt, S. K., Briggs, P., Bristow, M., Canadell, J. G., Cernusak, L. A., Eamus, D., Edwards, A. C., Evans, B. J., Fest, B., Goergen, K., Grover, S. P., Hacker, J., Haverd, V., Kanniah, K., Livesley, S. J., Lynch, A., Maier, S., Moore, C., Raupach, M., Russell-Smith, J., Scheiter, S., Tapper, N. J. and Uotila, P. | Fire in Australian savannas: from leaf to landscape. Savanna ecosystems comprise 22% of the global terrestrial surface and 25% of Australia (almost 1.9 million km2) and provide significant ecosystem services through carbon and water cycles and the maintenance of biodiversity. The current structure, composition and distribution of Australian savannas have coevolved with fire, yet remain driven by the dynamic constraints of their bioclimatic niche. Fire in Australian savannas influences both the biophysical and biogeochemical processes at multiple scales from leaf to landscape. Here, we present the latest emission estimates from Australian savanna biomass burning and their contribution to global greenhouse gas budgets. We then review our understanding of the impacts of fire on ecosystem function and local surface water and heat balances, which in turn influence regional climate. We show how savanna fires are coupled to the global climate through the carbon cycle and fire regimes. We present new research that climate change is likely to alter the structure and function of savannas through shifts in moisture availability and increases in atmospheric carbon dioxide, in turn altering fire regimes with further feedbacks to climate. We explore opportunities to reduce net greenhouse gas emissions from savanna ecosystems through changes in savanna fire management. | Glob Change Biol, 21: 62–81. doi:10.1111/gcb.12686. | 2015 |  |
23 | Abramson, D., Krzhizhanovskaya, V., Lees, M. | Perspectives of the International Conference of Computational Science 2014 Computational Science has enabled a raft of science that was either impossible, dangerous, or extremely expensive. It is arguably one of the most multi-disciplinary research endeavors, and draws on foundational work in mathematics and computer science. Computational Science has applicability in almost all scientific domains, and is now an essential tool in many of these. This special section contains extended papers originally published in proceedings of the 14th International Conference on Computational Science (ICCS 2014), an annual event that promotes leading edge research. | Journal of Computational Science, Volume 10, September 2015, Pages 247-248, ISSN 1877-7503, http://dx.doi.org/10.1016/j.jocs.2015.08.007. | 2015 |  |
24 | Abramson, D. | FlashLite: A High Performance Machine for Data Intensive Science Data is predicted to transform the 21st century, fuelled by an exponential growth in the amount of data captured, generated and archived. Traditional high performance machines are optimized for numerical computing rather than IO performance or for supporting large memory applications. This paper discusses a new machine, called FlashLite, which addresses these challenges. The paper describes the motivation for the design, and discusses some driving application themes. | 21st IEEE International Conference on Parallel and Distributed Systems (ICPADS 2015), Melbourne, Australia, December 14 – 17, 2015. | 2015 |  |
25 | Abramson, D., Dinh, M., Chao, J., DeRose, L., Gontarek, A., Vose, A. and Moench, B. | Relative debugging for a highly parallel hybrid computer system Relative debugging traces software errors by comparing two executions of a program concurrently - one code being a reference version and the other faulty. Relative debugging is particularly effective when code is migrated from one platform to another, and this is of significant interest for hybrid computer architectures containing CPUs, accelerators or coprocessors. In this paper we extend relative debugging to support porting stencil computation on a hybrid computer. We describe a generic data model that allows programmers to examine the global state across different types of applications, including MPI/OpenMP, MPI/OpenACC, and UPC programs. We present case studies using a hybrid version of the 'stellarator' particle simulation DELTA5D, on Titan at ORNL, and the UPC version of Shallow Water Equations on Crystal, an internal supercomputer of Cray. These case studies used up to 5,120 GPUs and 32,768 CPU cores to illustrate that the debugger is effective and practical. | IEEE Supercomputing 2015, Austin Texas, 16th – 20th November, 2015 | 2015 |  |
26 | Guru, S. M., Dwyer, R. G., Watts, M. E., Dinh, M. N., Abramson, D., Nguyen, H. A., Campbell, H. A., Franklin, C. E., Clancy, T. and Possingham, H. P. | A Reusable Scientific Workflow for Conservation Planning In order to perform complex scientific data analysis, multiple software and skillsets are generally required. These analyses can involve collaborations between scientific and technical communities, with expertise in problem formulation and the use of tools and programming languages. While such collaborations are useful for solving a given problem, the transferability and productivity of the approach are low and require considerable assistance from the original tool developers. | MODSIM 2015, 29th Nov – 4th Dec 2015, Gold Coast, Australia. | 2015 |  |
27 | Simonov, A., Grosse, W., Mashkina, M., Bethwaite, B., Tan, J., Abramson, D., Wallace, G., Moulton, S., Bond, A. | New insights into the analysis of the electrode kinetics of flavin adenine dinucleotide redox centre of glucose oxidase immobilized on carbon electrodes New insights into electrochemical kinetics of the flavin adenine dinucleotide (FAD) redox center of glucose-oxidase (GlcOx) immobilized on reduced graphene oxide (rGO), single- and multiwalled carbon nanotubes (SW and MWCNT), and combinations of rGO and CNTs have been gained by application of Fourier transformed AC voltammetry (FTACV) and simulations based on a range of models. A satisfactory level of agreement between experiment and theory, and hence establishment of the best model to describe the redox chemistry of FAD, was achieved with the aid of automated e-science tools. Although still not perfect, use of Marcus theory with a very low reorganization energy (≤0.3 eV) best mimics the experimental FTACV data, which suggests that the process is gated as also deduced from analysis of FTACV data obtained at different frequencies. Failure of the simplest models to fully describe the electrode kinetics of the redox center of GlcOx, including those based on the widely employed Laviron theory is demonstrated, as is substantial kinetic heterogeneity of FAD species. Use of a SWCNT support amplifies the kinetic heterogeneity, while a combination of rGO and MWCNT provides a more favorable environment for fast communication between FAD and the electrode. | Langmuir, Feb 2014. DOI: 10.1021/la404872p. | 2014 |  |
28 | Konagurthu, A.S., Allison, L., Abramson, D., Stuckey, P.J. and Lesk, A.M. | How precise are reported protein coordinate data? Atomic coordinates in the Worldwide Protein Data Bank (wwPDB) are generally reported to greater precision than the experimental structure determinations have actually achieved. By using information theory and data compression to study the compressibility of protein atomic coordinates, it is possible to quantify the amount of randomness in the coordinate data and thereby to determine the realistic precision of the reported coordinates. On average, the value of each Cα coordinate in a set of selected protein structures solved at a variety of resolutions is good to about 0.1 Å. | Acta Cryst. (2014). D70, 904-906, doi:10.1107/S1399004713031787. | 2014 |  |
29 | Dinh, M., Abramson, D., Chao, J. | Statistical assertion: a more powerful method for debugging scientific applications Traditional debuggers are of limited value for modern scientific codes that manipulate large complex data structures. Current parallel machines make this even more complicated, because the data structure may be distributed across processors, making it difficult to view/interpret and validate its contents. Therefore, many application developers resort to placing validation code directly in the source program. This paper discusses a novel debug-time assertion, called a “Statistical Assertion”, that allows using extracted statistics instead of raw data to reason about large data structures, thereby helping to locate coding defects. In this paper, we present the design and implementation of an ‘extendable’ statistical framework which executes the assertion in parallel by exploiting the underlying parallel system. We illustrate the debugging technique with a molecular dynamics simulation. The performance is evaluated on a 20,000-processor Cray XE6 to show that it is useful for real-time debugging. | Journal of Computational Science, Volume 5, issue 2, 126-134, March 2014. | 2014 |  |
30 | Vail, M., Tan, A., Murone, C., Abebe, D., Lee, F-T., Baer, M., Palath, V., Bebbington, C., Yarranton, G., Llerena, C., Garic, S., Abramson, D., Cartwright, G., Scott, A. and Lackmann, M. | Targeting EphA3 inhibits cancer growth by disrupting the tumor stromal microenvironment Eph receptor tyrosine kinases are critical for cell–cell communication during normal and oncogenic tissue patterning and tumor growth. Somatic mutation profiles of several cancer genomes suggest EphA3 as a tumor suppressor, but its oncogenic expression pattern and role in tumorigenesis remain largely undefined. Here, we report unexpected EphA3 overexpression within the microenvironment of a range of human cancers and mouse tumor xenografts where its activation inhibits tumor growth. EphA3 is found on mouse bone marrow–derived cells with mesenchymal and myeloid phenotypes, and activation of EphA3+/CD90+/Sca1+ mesenchymal/stromal cells with an EphA3 agonist leads to cell contraction, cell–cell segregation, and apoptosis. Treatment of mice with an agonistic α-EphA3 antibody inhibits tumor growth by severely disrupting the integrity and function of newly formed tumor stroma and microvasculature. Our data define EphA3 as a novel target for selective ablation of the tumor microenvironment and demonstrate the potential of EphA3 agonists for anticancer therapy. | Cancer Res. 2014 Aug 15;74(16):4470-81. doi: 10.1158/0008-5472.CAN-14-0218. | 2014 |  |
31 | Nguyen, H., Abramson, D. and Kipouros, T. | The WorkWays problem solving environment Science gateways allow computational scientists to interact with a complex mix of mathematical models, software tools and techniques, and high performance computers. Accordingly, various groups have built high-level problem-solving environments that allow these to be mixed freely. In this paper, we introduce an interactive workflow-based science gateway, called WorkWays. WorkWays integrates different domain specific tools, and at the same time is flexible enough to support user input, so that users can monitor and steer simulations as they execute. A benchmark design experiment is used to demonstrate WorkWays. | ICCS 2014, Cairns, June 10-12, 2014. | 2014 |  |
32 | Dinh, M., Abramson, D., Chao, J., Moench, B., Gontarek, A. and DeRose, L. | Support comparative debugging for large-scale UPC programs Relative debugging is a useful technique for locating errors that emerge from porting existing code to a new programming language or a new computing platform. Recent attention on the UPC programming language has resulted in a number of conventional parallel programs, for example MPI programs, being ported to UPC. This paper gives an overview of the data distribution concepts used in UPC and establishes the challenges in supporting the relative debugging technique for UPC programs that run on large supercomputers. The proposed solution is implemented on an existing parallel relative debugger ccdb, and the performance is evaluated on a Cray XE6 system with 16,348 cores. | ICCS 2014, Cairns, June 10-12, 2014. | 2014 |  |
33 | Dinh, M., Abramson, D., Chao, J. | Scalable relative debugging Detecting and isolating bugs that arise only at high processor counts is a challenging task. Over a number of years, we have implemented a special debugging method, called relative debugging, that supports debugging applications as they evolve or are ported to larger machines. It allows a user to compare the state of a suspect program against another reference version even as the number of processors is increased. The innovative idea is the comparison of runtime data to reason about the state of the suspect program. While powerful, a naïve implementation of the comparison phase does not scale to large problems running on large machines. In this paper, we propose two different solutions: a hash-based scheme and a direct point-to-point scheme. We demonstrate the implementation, a case study, and the performance of our techniques on 20K cores of a Cray XE6 system. | IEEE Transactions on Parallel and Distributed Systems, Volume 25, issue 3, 740-74, March 2013. | 2013 |  |
34 | Płóciennik, M., Bartek, T., Owsiak, O., Altintas, I., Wang, J., Crawl, D., Abramson, D., Imbeaux, F., Guillerminet, B., Frauel, Y., Lopez-Caniego, M., Plasencia, I., Pych, W. and Ciecielag, P. | Approaches to Distributed Execution of Scientific Workflows in Kepler The Kepler scientific workflow system enables creation, execution and sharing of workflows across a broad range of scientific and engineering disciplines while also facilitating remote and distributed execution of workflows. In this paper, we present and compare different approaches to distributed execution of workflows using the Kepler environment, including a distributed data-parallel framework using Hadoop and Stratosphere, and Cloud and Grid execution using Serpens, Nimrod/K and Globus actors. We also present real-life applications in computational chemistry, bioinformatics and computational physics to demonstrate the usage of different distributed computing capabilities of Kepler in executable workflows. We further analyze the differences between the approaches and provide guidance for their application. | Fundamenta Informaticae, IOS Press, Volume 128, Number 3 / 2013, pp 281-302. | 2013 |  |
35 | Dinh, M., Abramson, D., Chao, J., Gontarek, A., Moench, B. and DeRose, L. | A data-centric framework for debugging highly parallel applications Contemporary parallel debuggers allow users to control more than one processing thread while supporting the same examination and visualisation operations as sequential debuggers. This approach restricts the use of parallel debuggers when it comes to large-scale scientific applications run across hundreds of thousands of compute cores. First, manually observing the runtime data to detect errors becomes impractical because the data is too big. Second, performing expensive but useful debugging operations becomes infeasible as the computational codes become more complex, involving larger data structures, and as the machines become larger. This study explores the idea of a data-centric debugging approach, which could be used to make parallel debuggers more powerful. It discusses the use of ad hoc debug-time assertions that allow a user to reason about the state of a parallel computation. These assertions support the verification and validation of program state at runtime as a whole rather than focusing on that of only a single process state. Furthermore, the debugger's performance can be improved by exploiting the underlying parallel platform because the available compute cores can execute parallel debugging functions, while a program is idling at a breakpoint. We demonstrate the system with several case studies and evaluate the performance of the tool on a 20,000-core Cray XE6. | Software: Practice and Experience, Nov 2013. | 2013 |  |
36 | Lo, Y.H., Peachey, T., Abramson, D., and Michailova, A. | Sensitivity of rabbit ventricular action potential and Ca2+ dynamics to small variations in membrane currents and ion diffusion coefficients Little is known about how small variations in ionic currents and Ca2+ and Na+ diffusion coefficients impact action potential and Ca2+ dynamics in rabbit ventricular myocytes. We applied sensitivity analysis to quantify the sensitivity of the Shannon et al. model (Biophys. J., 2004) to 5%–10% changes in current conductances, channel distribution, and ion diffusion in rabbit ventricular cells. We found that action potential duration and Ca2+ peaks are highly sensitive to a 10% increase in L-type Ca2+ current; moderately influenced by 10% increases in Na+-Ca2+ exchanger, Na+-K+ pump, rapid delayed and slow transient outward K+ currents, and Cl− background current; and insensitive to 10% increases in all other ionic currents and sarcoplasmic reticulum Ca2+ fluxes. Cell electrical activity is strongly affected by a 5% shift of L-type Ca2+ channels and Na+-Ca2+ exchanger between junctional and submembrane spaces, while Ca2+-activated Cl−-channel redistribution has a modest effect. Small changes in submembrane and cytosolic diffusion coefficients for Ca2+, but not in Na+ transfer, may notably alter myocyte contraction. Our studies highlight the need for more precise measurements and further extension and testing of the Shannon et al. model. Our results demonstrate the usefulness of sensitivity analysis to identify specific knowledge gaps and controversies related to ventricular cell electrophysiology and Ca2+ signaling. | BioMed Research International, Volume 2013, Article ID 565431. | 2013 |  |
37 | Pettit, C.J., Williams, S., Bishop, I.D., Aurambout, J., Russel, A.B.M., Michael, A., Sharma, S., Hunter, D., Chan, P., Enticott, C.M., Borda, A., Abramson, D.A. | Building an ecoinformatics platform to support climate change adaptation in Victoria Our research is focused on developing an ecoinformatics platform to support climate change adaptation in Victoria. A multi-disciplinary, cross-organisational approach was taken in developing a platform of collaboration to support the understanding of climate change impact and the formulation of adaptation strategies. The platform comprises a number of components including: (i) a metadata discovery tool to support modelling, (ii) a workflow framework for connecting climate change models, (iii) geographical visualisation tools for communicating landscape and farm impacts, (iv) a landscape object library for storing and sharing digital objects, (v) a landscape constructor tool to support participatory decision-making, and (vi) an online collaboration space for supporting multi-disciplinary research and cross-organisational collaboration. In this paper we present the platform as it has been developed to support collaborative research and to inform stakeholders of the likely impacts of climate change in southwest Victoria, Australia. We discuss some of the drivers for research in developing the ecoinformatics platform and its components. We conclude by identifying some future research directions in better connecting researchers and communicating scientific outcomes in the context of climate change impact and adaptation. | Future Generation Computer Systems, vol 29, issue 2, Elsevier Science, Amsterdam Netherlands, pp. 624-640. | 2013 |  |
38 | Sher, A.A., Wang, K., Wathen, A., Maybank, P., Mirams, G., Abramson, D.A., Noble, D., Gavaghan, D.J. | A local sensitivity analysis method for developing biological models with identifiable parameters: Application to cardiac ionic channel modeling Computational cardiac models provide important insights into the underlying mechanisms of heart function. Parameter estimation in these models is an ongoing challenge, with many existing models being overparameterised. Sensitivity analysis presents a key tool for exploring parameter identifiability. While existing methods provide insights into the significance of the parameters, they are unable to identify redundant parameters in an efficient manner. We present a new singular value decomposition based algorithm for determining parameter identifiability in cardiac models. Using this local sensitivity approach, we investigate the Ten Tusscher 2004 rapid inward rectifier potassium and the Mahajan 2008 rabbit L-type calcium currents in ventricular myocyte models. We identify non-significant and redundant parameters and improve the models by reducing them to minimal models that are validated to have only identifiable parameters. The proposed approach provides a new method for model validation and evaluation of the predictive power of cardiac models. | Future Generation Computer Systems [P], vol 29, issue 2, Elsevier Science, Amsterdam Netherlands, pp. 591-598. | 2013 |  |
39 | Smanchat, S., Indrawan, M., Ling, S., Enticott, C. and Abramson, D. | Scheduling Parameter Sweep Workflow in the Grid Based on Resource Competition Workflow technology has been adopted in scientific domains to orchestrate and automate scientific processes in order to facilitate experimentation. Such scientific workflows often involve large data sets and intensive computation that necessitate the use of the Grid. To execute a scientific workflow in the Grid, tasks within the workflow are assigned to Grid resources. Thus, to ensure efficient execution of the workflow, Grid workflow scheduling is required to manage the allocation of Grid resources. Although many Grid workflow scheduling techniques exist, they are mainly designed for the execution of a single workflow. This is not the case with parameter sweep workflows, which are used for parametric study and optimisation. A parameter sweep workflow is executed numerous times with different input parameters in order to determine the effect of each parameter combination on the experiment. While executing multiple instances of a parameter sweep workflow in parallel can reduce the time required for the overall execution, this parallel execution introduces new challenges to Grid workflow scheduling. Not only is a scheduling algorithm that is able to manage multiple workflow instances required, but this algorithm also needs the ability to schedule tasks across multiple workflow instances judiciously, as tasks may require the same set of Grid resources. Without appropriate resource allocation, a resource competition problem could arise. We propose a new Grid workflow scheduling technique for parameter sweep workflows, called the Besom scheduling algorithm. The scheduling decision of our algorithm is based on the resource dependencies of tasks in the workflow, as well as conventional Grid resource-performance metrics. In addition, the proposed technique is extended to handle loop structures in scientific workflows without using existing loop-unrolling techniques. The Besom algorithm is evaluated using simulations with a variety of scenarios. A comparison between the simulation results of the Besom algorithm and of three existing Grid workflow scheduling algorithms shows that the Besom algorithm is able to perform better than the existing algorithms for workflows that have complex structures and that involve overlapping resource dependencies of tasks. | Future Generation Computer Systems 29 (2013) 1164–1183 | 2013 |  |
40 | Mashkina, E., Peachey, T., Lee, C.Y., Bond, A., Kennedy, G., Enticott, C., Abramson, D. and Elton, D. | Estimation of electrode kinetic and uncompensated resistance parameters and insights into their significance using Fourier transformed ac voltammetry and e-science software tools In transient forms of voltammetry, quantitative analysis of electrode kinetics and parameters such as uncompensated resistance (Ru) and double layer capacitance (Cdl) is usually undertaken by comparing experimental and simulated data. Commonly, the skill of the experimentalist is heavily relied upon to decide when a good fit of simulated to experimental data has been achieved. As an alternative approach, it is now shown how data analysis can be based on implementation of e-science software tools. Previously, a standard heuristic data analysis approach applied to the oxidation of ferrocene in acetonitrile (0.1 M Bu4NPF6) at a glassy carbon electrode using higher order harmonics available in Fourier transformed ac voltammetry implied that the heterogeneous charge transfer rate constant k0 is ≥0.25 cm s−1 with the charge transfer coefficient (α) lying in the range of 0.25–0.75. Application of e-science software tools to the same data set allows a more meaningful understanding of electrode kinetic data to be provided and also offers greater insights into the influence of the IRu (ohmic drop) on these parameters. For example, computation of contour maps based on a sweep of two sets of parameters such as k0 and Ru or α and k0 imply that α is 0.50 ± 0.05 and that k0 lies in the range 0.2–0.4 cm s−1 with Ru around 130 Ohm. Quantitative evaluation of k0, α and Ru for the quasi-reversible [Fe(CN)6]3− + e− ⇌ [Fe(CN)6]4− process at a glassy carbon electrode in aqueous media is also facilitated by use of e-science software tools. In this case, when used in combination with large amplitude Fourier transformed ac voltammetry, it is found for each harmonic that k0 for the electrode process lies close to 0.010 cm s−1, α is 0.50 ± 0.05 and Ru is 610 Ohm. | Journal of Electroanalytical Chemistry 690 (2013) 104–110. | 2013 |  |
41 | Huynh, M., Jin, C., Bethwaite, B., Abramson, D., Papadopoulos, P. and Clementi, L. | Improved Virtual Machine Cloud Provisioning The process of provisioning Virtual Machines (VM) into the cloud requires users to conduct a series of (often manual) operations to configure and deploy their VM instances. Commercial clouds often discard all changes made to a particular running VM instance at shutdown. This means that every time system software or application requirements change, the entire re-configuration process must be repeated. A variety of system configuration techniques, like Cfengine, Puppet and Chef, can be employed to automate this process and make it less error prone. However, they cannot efficiently expand existing local infrastructures into the Cloud. This paper describes one method to automate this process by taking advantage of the Rocks Cluster Toolkit (a cluster management and deployment suite) to enable users to author and test compatible VM images in a local environment and then upload fully-configured VMs to a commercial cloud. It explores techniques to improve the performance of the entire process: the speed of transferring VM images from a local environment to a working Amazon EC2 instance. By utilizing UDT (a UDP based application level data transport protocol) and some features of Amazon’s Elastic Block Store, our approach reduces a VM's provisioning time by 35% with a very modest increase in cost. Moreover, it allows users to make an informed decision about which provisioning technique should be used based upon a performance/cost criterion. | Cloud Asia, 2013, 14-17th May 2013, Singapore. | 2013 |  |
42 | Chao, J., Ding, L. and Abramson, D. | Extending the Eclipse Parallel Tools Platform debugger with Scalable Parallel Debugging Library The Eclipse Parallel Tools Platform (PTP) is an open source Integrated Development Environment (IDE) aiding the development of supercomputer applications. The PTP parallel debugger is used by a growing community of developers in scientific and engineering fields. This paper proposes a method of improving the communication infrastructure of the PTP debugger by taking advantage of a Scalable Parallel Debugging Library (SPDL). Unlike the present communication framework of PTP, the Scalable Debug Manager (SDM), SPDL provides a pluggable architecture that allows developers to select a communication protocol suitable for a targeted supercomputer. It currently supports a number of scalable protocols, including MRNet and SCI. The advanced features provided by these communication trees, like programmable filters and configurable topologies, allow developers to create more flexible solutions of efficient reduction and aggregation operations for parallel debugging. In particular, they allow parallel debuggers to handle the large volume of back-end messages in peta-scale environments with better efficiency. The architecture of the PTP debugger is extended to support SPDL. The extended architecture combines the advantages of the PTP debugger at the front-end and SPDL at the back-end. It improves the scalability and performance of the PTP debugger. Consequently, it provides a flexible option of utilizing the PTP debugger with pluggable communication protocols to address the debugging challenges in peta-scale environments. | ICCS 2013, Barcelona, Spain, June 5 – 7, 2013. | 2013 |  |
43 | Dinh, M., Abramson, D., Chao, J., Kurniawan, D., Gontarek, A., Moench, B. and DeRose, L. | Scalable parallel debugging with statistical assertions Traditional debuggers are of limited value for modern scientific codes that manipulate large complex data structures. This paper discusses a novel debug-time assertion, called a “Statistical Assertion”, that allows a user to reason about large data structures, and the primitives are parallelised to provide an efficient solution. We present the design and implementation of statistical assertions, and illustrate the debugging technique with a molecular dynamics simulation. We evaluate the performance of the tool on a 12,000-core Cray XE6. | 17th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP) – Poster, New Orleans, LA, USA, pp. 311-312. | 2012 |  |
44 | Chao, J., Abramson, D., Dinh, M., Kurniawan, D., Gontarek, A., Moench, B. and DeRose, L. | A Scalable Parallel Debugging Library with Pluggable Communication Protocols Parallel debugging faces challenges in both scalability and efficiency. A number of advanced methods have been invented to improve the efficiency of parallel debugging. As the scale of systems increases, these methods rely heavily on a scalable communication protocol in order to be utilized in large-scale distributed environments. This paper describes a debugging middleware that provides fundamental debugging functions supporting multiple communication protocols. Its pluggable architecture allows users to select proper communication protocols as plug-ins for debugging on different platforms. It aims to be utilized by various advanced debugging technologies across different computing platforms. The performance of this debugging middleware is examined on a Cray XE Supercomputer with 21,760 CPU cores. | CCGrid 2012, The 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing, May 13-16, 2012, Ottawa, Canada. | 2012 |  |
45 | Killeen, N., Lohrey, J., Farrell, M., Liu, W., Garic, S., Abramson, D. and Nguyen, H. | Integration of modern data management practice with scientific workflows Modern science increasingly involves managing and processing large amounts of distributed data accessed by global teams of researchers. To do this, we need systems that combine data, meta-data and workflows into a single system. This paper discusses such a system, built from a number of existing technologies. We demonstrate its effectiveness with a case study that analyses MRI data. | 8th IEEE International Conference on e-Science, October 8-12, 2012, Chicago, IL, USA. | 2012 |  |
46 | Nguyen, H. and Abramson, D. | WorkWays: Interactive Workflow-based Science Gateways Workflow-based science gateways that bring the power of scientific workflows to the Web are becoming increasingly popular. Different IO models enabling interactions between a running workflow and a web portal have been explored. However, these are typically not dynamic enough to allow users to insert data into, or export data out of, a continuously running workflow. In this paper, we present a novel IO model, which supports dynamic interaction between a workflow and its portal. We discuss a use case in which web portals are used to control the execution of scientific workflows. This IO model will be part of our workflow-based science gateway named WorkWays. | 8th IEEE International Conference on e-Science, October 8-12, 2012, Chicago, IL, USA. | 2012 |  |
47 | Chambers, J., Bethwaite, B., Diamond, N., Peachey, T., Abramson, D., Petrou, S. and Thomas, E. | Parametric computation predicts a multiplicative interaction between synaptic strength parameters that controls properties of gamma oscillations Gamma oscillations are thought to be critical for a number of behavioral functions; they occur in many regions of the brain and through a variety of mechanisms. Fast repetitive bursting (FRB) neurons in layer 2 of the cortex are able to drive gamma oscillations over long periods of time. Even though the oscillation is driven by FRB neurons, strong feedback within the rest of the cortex must modulate properties of the oscillation such as frequency and power. We used a highly detailed model of the cortex to determine how a cohort of 33 parameters controlling synaptic drive might modulate gamma oscillation properties. We were interested in determining not just the effects of parameters individually, but we also wanted to reveal interactions between parameters beyond additive effects. To prevent a combinatorial explosion in parameter combinations that might need to be simulated, we used a fractional factorial design (FFD) that estimated the effects of individual parameters and two-parameter interactions. This experiment required only 4096 model runs. We found that the largest effects on both gamma power and frequency came from a complex interaction between the efficacy of synaptic connections from layer 2 inhibitory neurons to layer 2 excitatory neurons and the parameter for the reciprocal connection. As well as the effect of the individual parameters determining synaptic efficacy, there was an interaction between these parameters beyond the additive effects of the parameters alone. The magnitude of this effect was similar to that of the individual parameters, predicting that it is physiologically important in setting gamma oscillation properties. | Frontiers in Computational Neuroscience, Volume 6, Article 53, doi: 10.3389/fncom.2012.00053, July 2012. | 2012 |  |
48 | Yu, H.L., Tan, J. and Abramson, D. | Solving Optimization Problems in Nimrod/OK using a Genetic Algorithm A scientific workflow can be viewed as a formal model of the flow of data between processing components. It often involves a combination of data integration, computation, analysis, and visualization steps. An emerging use case involves determining some input parameters that minimize (or maximize) the output of a computation. Kepler is a good framework for specifying such optimizations because arbitrary computations can be composed into a pipeline, which is then repeated until an optimal set of inputs is found. Genetic Algorithms are generic optimization algorithms based on the principles of genetics and natural selection, and are well suited for models with discontinuous objective functions. This paper discusses an implementation of a Genetic Algorithm in Kepler, building on the Nimrod/OK framework. The resulting tool is generic and flexible enough to support a variety of experimental domains. The paper reports a number of experiments that demonstrate the performance with a set of benchmarking functions. | ICCS 2012, Omaha, Nebraska, June 3-6. | 2012 |  |
49 | Dinh, M., Abramson, D., Chao, J., Kurniawan, D., Gontarek, D., Moench, B. and DeRose, L. | Debugging Scientific Applications With Statistical Assertions Traditional debuggers are of limited value for modern scientific codes that manipulate large complex data structures. Current parallel machines make this even more complicated, because the data may be distributed across multiple processors, making it difficult to view, interpret and validate the contents of a distributed structure. As a result, many application developers resort to placing validation and display code directly in the source program itself. This paper discusses a novel debug-time assertion, called a “Statistical Assertion”, that allows a user to reason about large data structures. We present the design and implementation of statistical assertions, and illustrate the debugging technique with a molecular dynamics simulation. We evaluate the performance of the system on a 12,000 processor Cray XE6, and show that it is useful for real time debugging. | International Conference on Computational Science (ICCS), Omaha, Nebraska, USA. | 2012 |  |
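To make the idea of a statistical assertion concrete, the sketch below (plain Python with numpy, not the tool described in the paper; the function and variable names are hypothetical) tests a property of a summary statistic computed over chunks of a large array instead of inspecting every element.

    import numpy as np

    def statistical_assert(chunks, stat=np.mean, predicate=lambda v: abs(v) < 1e3):
        """Reduce per-chunk statistics to a global value and test a predicate on it."""
        per_chunk = [stat(c) for c in chunks]          # local reductions, one per chunk/process
        global_value = stat(np.array(per_chunk))       # combine (exact for the mean of equal-sized chunks)
        assert predicate(global_value), f"statistical assertion failed: {global_value}"
        return global_value

    # Example: a field split into 4 equal chunks should have a bounded mean.
    field = np.random.normal(loc=0.0, scale=1.0, size=1_000_000)
    statistical_assert(np.array_split(field, 4))

In a parallel debugger the per-chunk reductions would run on the processes owning the data, so only a handful of numbers need to be gathered and compared.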
50 | Kipouros, T., Peachey, T., Abramson, D. and Savill, M. | Enhancing and Developing the Practical Optimisation Capabilities and Intelligence of Automatic Design Software | 8th AIAA Multidisciplinary Design Optimization Specialist Conference, 23 – 26 April, Honolulu, Hawaii, AIAA-2012-1677. | 2012 |  |
51 | Peachey, T.C., Riley, M. J. W., Abramson, D. and Stewart, J. | A Simplex-like Search Method for Bi-objective Optimization We describe a new algorithm for bi-objective optimization, similar to the Nelder Mead simplex algorithm, widely used for single objective optimization. For differentiable bi-objective functions on a continuous search space, internal Pareto optima occur where the two gradient vectors point in opposite directions. So such optima may be located by minimizing the cosine of the angle between these vectors. This requires a complex rather than a simplex, so we term the technique the cosine seeking complex. An extra benefit of this approach is that a successful search identifies the direction of the efficient curve of Pareto points, expediting further searches. Results are presented for some standard test functions. The method presented is quite complicated and space considerations here preclude complete details. We hope to publish a fuller description in another place. | EngOpt 2012 – International Conference on Engineering Optimization, Rio de Janeiro, Brazil, 1-5 July 2012. | 2012 |  |
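In symbols (a restatement of the criterion given in the abstract, not the paper's own notation), the search minimises the cosine of the angle between the two objective gradients,

    \cos\theta(x) \;=\; \frac{\nabla f_1(x)\cdot\nabla f_2(x)}{\lVert\nabla f_1(x)\rVert\,\lVert\nabla f_2(x)\rVert},

since at an interior Pareto optimum the gradients are anti-parallel, \nabla f_1(x) = -\lambda\,\nabla f_2(x) for some \lambda > 0, which gives \cos\theta(x) = -1.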
52 | Peachey, T.C., Diamond, N.T., Abramson, D. and Enticott, C.M. | Computing Factorial Designs for Large Numbers of Factors with 2^m levels. A new algorithm is presented for fractional factorial experimental design. It is faster than existing algorithms and hence can provide designs with more factors than previous designs. Such large experiments are required in the new field of parametric computing. | | 2012 |  |
53 | Peachey, T.C., Abramson, D., Lewis, A. | Heuristics for Parallel Simulated Annealing by Speculation. This paper considers parallel execution of the standard simulated annealing algorithm using speculative computing. Various heuristics for estimating the probabilities of move acceptance are advanced and their performance compared experimentally. Some are found to be superior to methods previously published. | | 2012 |  |
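A minimal sketch of the speculation idea (illustrative Python under assumed names, not the heuristics evaluated in the paper): the Metropolis rule gives an estimate of how likely the next move is to be accepted, and speculative workers are divided between the "accepted" and "rejected" successor states in that proportion.

    import math

    def estimated_acceptance(mean_uphill_delta, temperature, uphill_fraction):
        """Downhill moves are always accepted; uphill moves with probability exp(-delta/T)."""
        p_uphill = math.exp(-mean_uphill_delta / temperature)
        return (1.0 - uphill_fraction) + uphill_fraction * p_uphill

    def split_workers(n_workers, p_accept):
        """Allot speculative workers to the accept/reject branches in proportion to p_accept."""
        accept_side = round(n_workers * p_accept)
        return accept_side, n_workers - accept_side

    p = estimated_acceptance(mean_uphill_delta=2.0, temperature=5.0, uphill_fraction=0.6)
    print(split_workers(8, p))   # (6, 2): favour the accept branch while the temperature is high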
54 | S. Smanchat, M. Indrawan, S. Ling, C. Enticott, D. Abramson | A Scheduler based on Resource Competition for Parameter Sweep Workflow Grid workflow scheduling has been a prevalent field of research in order to allocate scientific workflow tasks to grid resources. To actuate these grid workflow scheduling algorithms, schedulers need to be developed for grid workflow management systems. A scheduler is a component that gathers information, such as estimated execution times and lists of available grid resources, as inputs for scheduling algorithms. Once a grid schedule is generated, the scheduler uses it to allocate grid resources to the tasks in the workflow. This is even more complicated for parameter sweep workflow scheduling. As parameter sweep workflows are repeatedly executed a number of times with different inputs, to schedule them in parallel, the scheduler must be able to handle multiple workflow instances and multiple scheduling iterations. In this paper, we present a scheduling algorithm for parameter sweep workflows and suggest an implementation of a scheduler for parameter sweep workflows based on this algorithm. We highlight the implementation issues encountered in our experience of scheduler development. | Proceedings of the International Conference on Computational Science, ICCS, 1-3 June 2011, pp. 176-185. | 2011 |  |
55 | Dinh, M., Abramson, D., Kurniawan, D., Moench, B. and DeRose, L. | Assertion based parallel debugging Programming languages have advanced tremendously over the years, but program debuggers have hardly changed. Sequential debuggers do little more than allow a user to control the flow of a program and examine its state. Parallel ones support the same operations on multiple processes, which are adequate with a small number of processors, but become unwieldy and ineffective on very large machines. Typical scientific codes have enormous multi-dimensional data structures and it is impractical to expect a user to view the data using traditional display techniques. In this paper we discuss the use of debug-time assertions, and show that these can be used to debug parallel programs. The techniques reduce the debugging complexity because they reason about the state of large arrays without requiring the user to know the expected value of every element. Assertions can be expensive to evaluate, but their performance can be improved by running them in parallel. We demonstrate the system with a case study finding errors in a parallel version of the Shallow Water Equations, and evaluate the performance of the tool on a 4,096 cores Cray XE6. | CCGrid, 2011, Newport Beach, CA, 24-26th May, 2011. | 2011 |  |
56 | Abramson, D. A. | Applications Development for the Computational Grid This Doctorate of Science thesis tracks the development, since the mid-1990s, in designing solutions that support the software lifecycle of Grid software. Under my supervision, my research students and associates have designed new approaches, and built exemplars that demonstrate the effectiveness of the ideas. We have applied these tools to real world e-Science problems. The work is divided into 14 chapters that focus on issues in the development, deployment, debugging and execution of e-Science applications. The thesis tells a story, moving from earlier, less sophisticated, ideas and tools, through to more advanced and effective approaches. Each chapter deals with a particular approach, and highlights the software tools that embody the ideas. The collected chapters not only propose new techniques and strategies, but also illustrate these with real software solutions. As a result, new tools, such as the Nimrod family of development tools and the Guard debugging tools, have been built and applied to real applications. Thus, this work has demonstrated its applicability not only through a set of well-cited papers, but also by user adoption. | D.Sc. Thesis, Faculty of Science, Monash University. | 2011 |  |
57 | Peachey, T., Mashkina, E., Lee, C., Enticott, C., Abramson, D., Bond, A., Elton, D., Gavaghan, D., Stevenson, G., Kennedy, G. | Leveraging e-Science Infrastructure for Electrochemistry Research As in many scientific disciplines, modern chemistry involves a mix of experimentation and computer-supported theory. Historically, these skills have been provided by different groups, and range from traditional ‘wet’ laboratory science to advanced numerical simulation. Increasingly, progress is made by global collaborations, in which new theory may be developed in one part of the world and applied and tested in the laboratory elsewhere. e-Science, or cyber-infrastructure, underpins such collaborations by providing a unified platform for accessing scientific instruments, computers and data archives, and collaboration tools. In this paper we discuss the application of advanced e-Science software tools to electrochemistry research performed in three different laboratories – two at Monash University in Australia and one at the University of Oxford in the UK. We show that software tools that were originally developed for a range of application domains can be applied to electrochemical problems, in particular Fourier voltammetry. Moreover, we show that, by replacing ad-hoc manual processes with e-Science tools, we obtain more accurate solutions automatically. | Phil. Trans. R. Soc. A (AHM 2011), 28 August 2011, vol. 369, no. 1949, pp 3336-3352, DOI: 10.1098/rsta.2011.0146. | 2011 |  |
58 | Spencer, D., Zimmerman, A. and Abramson, D. | Project Management in E-Science: Challenges and Opportunities In this introduction to the special theme: Project Management in e-Science: Challenges and Opportunities, we argue that the role of project management and different forms of leadership and facilitation can influence significantly the nature of cooperation and its outcomes and deserves further research attention. The quality of social interactions such as communication, cooperation, and coordination, have emerged as key factors in developing and deploying e-science infrastructures and applications supporting large-scale and distributed collaborative scientific research. If software is seen to embody the relational web within which it evolves, and if the processes of software design, development and deployment are seen as ongoing transformations of this dynamic web of relationships between technology, people and environment, the role of managers becomes crucial: it is their responsibility to balance and facilitate the dynamics of these relationships. | Computer Supported Cooperative Work (CSCW), DOI: 10.1007/s10606-011-9140-4, June 2011, Springer | 2011 |  |
60 | Kurniawan, D. and Abramson, D. | ISENGARD: an Infrastructure for Supporting e-Science and Grid Application Development Grid computing facilitates the aggregation and coordination of resources that are distributed across multiple administrative domains for large-scale and complex e-Science experiments. Writing, deploying, and testing grid applications over highly heterogeneous and distributed resources are complex and challenging. The process requires grid-enabled programming tools that can handle the complexity and scale of the infrastructure. However, while a large amount of research has been undertaken into grid middleware, little work has been directed specifically at the area of grid application development tools. This paper presents the design and implementation of ISENGARD, an infrastructure for supporting e-Science and grid application development. ISENGARD provides services, tools, and APIs that simplify grid software development. Copyright © 2010 John Wiley & Sons, Ltd. | Concurrency and Computation: Practice and Experience, Volume 23, Issue 4, pp 390–414, 25 March 2011. | 2011 |  |
61 | Abramson, D., Bethwaite, B., Enticott, C., Garic, S. and Peachey, T. | Parameter Exploration in Science and Engineering using Many-Task Computing Robust scientific methods require the exploration of the parameter space of a system (some of which can be run in parallel on distributed resources), and may involve complete state space exploration, experimental design, or numerical optimization techniques. Many-Task Computing (MTC) provides a framework for performing robust design, because it supports the execution of a large number of otherwise independent processes. Further, scientific workflow engines facilitate the specification and execution of complex software pipelines, such as those found in real science and engineering design problems. However, most existing workflow engines do not support a wide range of experimentation techniques, nor do they support a large number of independent tasks. In this paper, we discuss Nimrod/K - a set of add-in components and a new run-time machine for a general workflow engine, Kepler. Nimrod/K provides an execution architecture based on the tagged dataflow concepts, developed in the 1980s for highly parallel machines. This is embodied in a new Kepler “Director” that supports many-task computing by orchestrating execution of tasks on clusters, Grids, and Clouds. Further, Nimrod/K provides a set of “Actors” that facilitate the various modes of parameter exploration discussed above. We demonstrate the power of Nimrod/K to solve real problems in cardiac science. | Special issue of IEEE Transactions on Parallel and Distributed Systems on Many-Task Computing, June 2011, Volume 22, Issue 6, pp 960–973. | 2011 |  |
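The execution pattern described (every parameter combination becomes an independent task that can run concurrently) can be sketched in a few lines of plain Python, with a local process pool standing in for the clusters, Grids and Clouds orchestrated by the Nimrod/K director; the model function below is a placeholder, not code from the paper.

    from concurrent.futures import ProcessPoolExecutor
    from itertools import product

    def simulate(conductance, stimulus):
        """Placeholder for one run of a computational model."""
        return conductance * stimulus ** 2

    def sweep(conductances, stimuli):
        combos = list(product(conductances, stimuli))          # full cross-product of parameters
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(simulate, *zip(*combos)))  # each combination is an independent task
        return dict(zip(combos, results))

    if __name__ == "__main__":
        print(sweep([0.5, 1.0, 2.0], [1, 2, 3]))               # 9 tasks executed concurrently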
62 | Nguyen, H., Abramson, D., Bethwaite, B., Dinh, M., Enticott, C., Firth, S., Garic, S., Harper, I., Lackmann, M., Russel, A.B.M., Schek, S. and Vail, M. | Integrating Scientific Workflows and Large Tiled Display Walls: Bridging the Visualization Divide Modern in-silico science (or e-Science) is a complex process, often involving multiple steps conducted across different computing environments. Scientific workflow tools help scientists automate, manage and execute these steps, providing a robust and repeatable research environment. Increasingly workflows generate data sets that require scientific visualization, using a range of display devices such as local workstations, immersive 3D caves and large display walls. Traditionally, this display step is handled outside the workflow, and output files are manually copied to a suitable visualization engine for display. This inhibits the scientific discovery process by disconnecting the workflow that generated the data from the display and interpretation processes. In this paper we present a solution that links scientific workflows with a variety of display devices, including large tiled display walls. We demonstrate the feasibility of the system by a prototype implementation that leverages the Kepler workflow engine and the SAGE display software. We illustrate the use of the system with a case study in workflow driven microscopy. | Fourth International Workshop on Parallel Programming Models and Systems Software for High-End Computing (P2S2), co-located with 2011 International Conference on Parallel Processing (ICPP-2011), Taipei, Taiwan September 13-16, 2011. | 2011 |  |
63 | Abramson, D., Arzberger, P., Wienhausen, G., Date, S., Lin, F-P., Nan, K. and Shimojo, S. | Cyberinfrastructure Internship and its application to e-Science Universities are constantly searching for ways that prepare students as effective global professionals. At the same time, cyberinfrastructure leverages computing, information, and communication technology to perform research, often in an international context. In this paper we discuss a novel model, called the Cyberinfrastructure Internship Program (CIP), which serves both of these goals. Specifically, students apply or develop cyberinfrastructure to solve challenging research problems, but they do this via summer internships abroad. CIP has been implemented at three different Universities: The University of California San Diego in the US, Osaka University in Japan and Monash University in Australia. We discuss details of the schemes and provide some initial evaluations of their success. | e-Science 2011, Stockholm, Dec 2011. | 2011 |  |
64 | Arzberger, P., Wienhausen, G., Abramson, D., Date, S., Lin, F-P., Nan, K. and Shimojo, S. | PRIME: an integrated and sustainable undergraduate international research program Recently we have seen an increase in the calls for universities and the education community to re-think undergraduate education and create opportunities that prepare students as effective global professionals. The key motivator is the need to build a research and industrial workforce that works collaboratively across cultures and disciplines to address major global challenges. At the same time, computing, information, and communication technology facilitates a comprehensive ‘cyberinfrastructure’ on which new types of scientific and engineering knowledge environments and organizations can be constructed. We describe our four-year pilot experience with the Pacific Rim Experiences for Undergraduates (PRIME) project. The goals of PRIME are to: develop an integrated and sustainable undergraduate international research program that serves as a model for 21st Century undergraduate education; prepare students to become effective global professionals and citizens; and, give students a head-start on careers in science, engineering and technology research. We discuss the design and motivation for the scheme, salient implementation details, outcomes to date and discuss challenges of scalability and sustainability. | Advances in Engineering Education, American Society for Engineering Education, Summer 2010, Volume 2, Number 2. | 2010 |  |
65 | Riley, M. J. W., Peachey, T., Abramson D., and Jenkins, K. W. | Multi-objective engineering shape optimization using differential evolution interfaced to the Nimrod/O tool This paper presents an enhancement of the Nimrod/O optimization tool by interfacing DEMO, an external multiobjective optimization algorithm. DEMO is a variant of differential evolution – an algorithm that has attained much popularity in the research community, and this work represents the first time that true multiobjective optimizations have been performed with Nimrod/O. A modification to the DEMO code enables multiple objectives to be evaluated concurrently. With Nimrod/O’s support for parallelism, this can reduce the wall-clock time significantly for compute intensive objective function evaluations. We describe the usage and implementation of the interface and present two optimizations. The first is a two-objective mathematical function in which the Pareto front is successfully found after only 30 generations. The second test case is the three-objective shape optimization of a rib-reinforced wall bracket using the Finite Element software, Code_Aster. The interfacing of the already successful packages of Nimrod/O and DEMO yields a solution that we believe can benefit a wide community, both industrial and academic. | IOP Conference Series: Materials Science and Engineering, Volume 10, Article Number 012189. | 2010 |  |
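For readers unfamiliar with differential evolution, the sketch below shows the classic DE/rand/1 mutation and binomial crossover that DEMO-style algorithms build on (illustrative Python only; the paper interfaces an existing DEMO code to Nimrod/O rather than reimplementing it, and the names here are hypothetical).

    import numpy as np

    def de_trial_vector(population, target_idx, F=0.8, CR=0.9, rng=np.random.default_rng(0)):
        """DE/rand/1 mutation plus binomial crossover for one target vector."""
        n, dim = population.shape
        candidates = [i for i in range(n) if i != target_idx]
        r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
        mutant = population[r1] + F * (population[r2] - population[r3])
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True            # guarantee at least one gene from the mutant
        return np.where(cross, mutant, population[target_idx])

    pop = np.random.default_rng(1).uniform(-5, 5, size=(20, 4))
    print(de_trial_vector(pop, target_idx=0))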
66 | Bethwaite, B., Abramson, D., Bohnert, F., Garic, S., Enticott, C., Peachey, T. | Mixing the Grid and Clouds: High-throughput Science using the Nimrod Tool Family The Nimrod tool family facilitates high-throughput science by allowing researchers to explore complex design spaces using computational models. Users are able to describe large experiments in which models are executed across changing input parameters. Different members of the tool family support complete and partial parameter sweeps, numerical search by non-linear optimisation, and even workflows. In order to provide timely answers, and to enable very large experiments, distributed computational resources are aggregated to form a logically single high-throughput engine. To date, we have leveraged grid middleware standards to spawn computations on remote machines. Recently we added an interface to Amazon’s Elastic Compute Cloud (EC2), allowing users to mix conventional grid resources and clouds. A range of schedulers, from round-robin queues to those based on economic budgets, allow Nimrod to mix and match resources. This provides a powerful platform for computational researchers, because they can use a mix of university level infrastructure and commercial clouds. In particular, the system allows a user to pay money to increase the quality of the research outcomes, and to decide exactly how much they want to pay to achieve a given return. In this chapter we will describe Nimrod and its architecture, and show how this naturally scales to incorporate clouds. We will illustrate the power of the system using a case study, and will demonstrate that cloud computing has the potential to enable high-throughput science. | | 2010 |  |
67 | Tan, J., Abramson, D. and Enticott, C. | Firewall Traversal in the Grid Architecture Computational grids have been at the forefront of e-science supercomputing for several years now. Many issues have been settled over the years, but some may only appear to be so. One of these issues is firewall traversal. Many solutions have been proposed and developed. We have developed two such solutions ourselves: remus and Romulus, but like all other solutions, they are limited in application. Others are working on proposed standards or solutions based on existing internet standards and RFCs. However, we still have production -level grids that instead operate their grid resources on an open firewall policy. Some propose moving grids on top of peer-to-peer networks and/or overlay networks, or rebuilding grids on top of clouds instead.Existing grid infrastructures have not rushed to follow either path, however, as the required changes will take considerable effort and cost for currently running systems. This paper investigates the problem and offers a different proposal: a minor revision to the grid architecture. In order to support what we propose, we will look at several proposed solutions and identify their limitations. We also classify them into two distinct approaches, and discuss how each one is not by itself sufficient for all situations. Then we shall show that a slight improvement to the grid protocol architecture provides a multi-pronged architectural solution. | 12th IEEE International Conference on High Performance Computing and Communications (HPCC-2010), Melbourne, Australia, September 1-3, 2010. | 2010 |  |
68 | Pettit, C., Russel, A.B.M., Michael, A., Aurambout, J-P., Sharma, S., Williams, S., Hunter, D., Chan, P.C., Borda, A., Bishop, I., Abramson, D. | Realising an eScience platform to support climate change adaptation in Victoria Our research is focused on developing an ecoinformatics platform to support climate change adaptation in Victoria. A multidisciplinary, cross-organisational approach is taken in developing adaptation strategies to deal with the ‘diabolical’ policy problem of climate change. The platform comprises a number of components including: (i) a metadata discovery tool to support modelling, (ii) a workflow engine for connecting climate change models, (iii) geographical visualisation tools for communicating landscape and farm impacts, (iv) a landscape object library for storing and sharing digital models, (v) a landscape constructor tool to support participatory decision-making, and (vi) a virtual organisation for collaboration and sharing information. In this paper we will discuss the platform as it has been developed to support collaborative research and to inform stakeholders of the likely impacts of climate change in South West Victoria, Australia. We will discuss some of the drivers for research in developing the ecoinformatics platform and its components. The paper concludes by identifying some future research directions in better connecting researchers and communicating science outcomes associated with climate change impact and adaptation. | IEEE e-Science 2010, December 6th – 9th, Brisbane, 2010. | 2010 |  |
69 | Nunez, S., Bethwaite, B., Brenes, J., Barrantesz, G., Castro, J., Malavassiz, E. and Abramson, D. | NG-TEPHRA: A Massively Parallel, Nimrod/G-enabled Volcanic Simulation in the Grid and the Cloud Volcanoes are a principal factor of hazard across the Pacific Rim, with their focus of interest mostly divided into pyroclastic flows and ash deposition. The latter has significantly more impact due to its widespread geographical reach and prolonged effects in human activities and health. TEPHRA is a volcanic ash dispersion model based on a simple version of the advection-diffusion Suzuki model, which has been revisited and modified for the Irazú volcano in Costa Rica. A full parameter exploration is necessary in this particular case (albeit not sufficient) due to scarce observational data. We present in this paper the model, its assumptions and limitations as well as application lifecycle with resulting ash distribution graphics. The computational experimental settings are described, in particular the use of Nimrod/G with respect to non-homogeneous parameter sweeps and its impact on execution time. We also analyze the implementation of a new parameter discard mechanism common to e-Science experiments where sequential generation of new parameter sets has to be complemented with an early verification in order to avoid allocation of CPU time to non-valid scenarios. Finally four sample 100K-scenario runs are analyzed for both traditional HPC clustering and Cloud computing resources in the Amazon EC2 Cloud. | IEEE e-Science 2010, December 6th – 9th, Brisbane, 2010. | 2010 |  |
70 | Sher, A., Wang, K., Wathen, A., Abramson, D. and Gavaghan., D. | A Local Sensitivity Analysis Method for Developing Cardiac Models with Identifiable Parameters: Application to L-type Calcium Channels Regulation Computational cardiac models provide important insights into the underlying mechanisms of heart function. Parameter estimation in these models is an ongoing challenge with many existing models being overparameterised. Sensitivity analysis presents a key tool for exploring the parameter identifiability. While existing methods provide insight into the significance of the parameters, they are unable to identify redundant parameters in an efficient manner. We present a new singular value decomposition based algorithm for determining parameter identifiability in cardiac models. Using this local sensitivity approach, we investigate the Mahajan 2008 rabbit ventricular myocyte L-type calcium current model. We identify non-significant and redundant parameters and improve the Ical model by reducing it to a minimum one that is validated to have only identifiable parameters. The newly proposed approach provides a new method for model validation and evaluation of the predictive power of cardiac models. | IEEE e-Science 2010, December 6th – 9th, Brisbane, 2010. | 2010 |  |
71 | Enticott, C., Peachey, T., Abramson, D., Gavaghan, D., Bond, A., Elton, D., Mashkina, E., Lee, C., Kennedy, G. | Electrochemical Parameter Optimization using Scientific Workflows Modern chemistry involves a mix of experimentation and computer-supported theory. Historically, these skills have been provided by different groups, and range from traditional “wet” laboratory science to advanced numerical simulation. This paper discusses the application of advanced e-Science software tools to electrochemistry research and involves collaboration between laboratories at Monash and Oxford Universities. In particular, we show how the Nimrod/OK tool can be used to automate the estimation of electrochemical parameters in Fourier transformed voltammetry. Replacing an ad-hoc manual process with e-Science tools both accelerates the process and produces more accurate solutions. In this work much attention was given to the role of the scientist user. During the research, Nimrod/K (on which Nimrod/OK is built) was extended to shield that user from technical details of the grid infrastructure; a new system of file management and grid abstraction has been incorporated. | IEEE e-Science 2010, December 6th – 9th, Brisbane, 2010. | 2010 |  |
72 | Sher, A., Cooling, M., Bethwaite, B., Tan, J., Peachey, T., Enticott, C., Garic, S., Gavaghan, D., Noble, D., Abramson, D. and Crampin, E. | A Global Sensitivity Tool For Cardiac Cell Model: Ionic Currents Balance and Hypertrophic IP3 Transients in Atrial and Ventricular Cell Models Cardiovascular diseases are the major cause of death in the developed countries. Identifying key cellular processes involved in generation of the electrical signal and in regulation of signal transduction pathways is essential for unravelling the underlying mechanisms of heart rhythm behaviour. Computational cardiac models provide important insights into cardiovascular function and disease. Sensitivity analysis presents a key tool for exploring the large parameter space of such models, in order to determine the key factors determining and controlling the underlying physiological processes. We developed a new global sensitivity analysis tool which implements the Morris method, a global sensitivity screening algorithm, on the Nimrod platform, a distributed resources software toolkit. The newly developed tool has been validated using the IP3-calcineurin signal transduction pathway model, which has 30 parameters. The key driving factors of the IP3 transient behaviour have been calculated and confirmed to agree with previously published data. We next demonstrated the use of this method as an assessment tool for characterizing the structure of cardiac ionic models. In three latest human ventricular myocyte models, we examined the contribution of transmembrane currents to the shape of the electrical signal (i.e. on the action potential duration). The resulting profiles of the ionic current balance demonstrated the highly nonlinear nature of cardiac ionic models and identified key players in different models. Such profiling suggests new avenues for development of methodologies to predict drug action effects in cardiac cells. | 32nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS), Buenos Aires, Argentina, August 31 – September 4, 2010. | 2010 |  |
73 | Abramson, D., Dinh, M.N., Kurniawan, D., Moench, B. and DeRose, L. | Data Centric Highly Parallel Debugging Debugging parallel programs is an order of magnitude more complex than sequential ones, and yet, most parallel debuggers provide little extra functionality than their sequential counterparts. This problem becomes more serious as computational codes become more complex, involving larger data structures, and as the machines become larger. Peta-scale machines consisting of millions of cores pose a significant challenge for existing techniques. We argue that debugging must become more data-centric, and believe that “assertions” provide a useful model. Assertions allow a user to declare their expectations about the program state as a whole rather than focusing on that of only a single process state. Previously, we have implemented a special type of assertion that supports debugging applications as they evolve or are ported to different platforms. They allow a user to compare the state of one program against another reference version. These ‘relative debugging’ assertions, whilst powerful, pose significant implementation challenges for large peta-scale machines. In this paper we discuss a hashing technique that provides a scalable solution for very large problems on very large machines. We illustrate the scheme on 65k cores of Kraken, a Cray XT5 at the University of Tennessee. | 2010 International Symposium on High Performance Distributed Computing (HPDC 2010), Chicago, USA, June 2010 | 2010 |  |
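The hashing idea can be illustrated with a small, hypothetical Python sketch: each program hashes fixed-size blocks of its array locally, and only the compact hash vectors are compared, so the volume of data exchanged stays independent of the array size (the function names and block size here are assumptions, not the paper's implementation).

    import hashlib
    import numpy as np

    def block_hashes(array, block=1024):
        """Hash fixed-size blocks of a flattened array."""
        flat = np.ascontiguousarray(array).ravel()
        return [hashlib.sha1(flat[i:i + block].tobytes()).hexdigest()
                for i in range(0, flat.size, block)]

    def first_divergent_block(hashes_a, hashes_b):
        """Return the index of the first block whose hashes differ, or None."""
        for i, (a, b) in enumerate(zip(hashes_a, hashes_b)):
            if a != b:
                return i                           # narrow the element-wise comparison to this block
        return None

    ref = np.arange(10_000, dtype=np.float64)
    sus = ref.copy(); sus[4321] += 1e-9            # injected error
    print(first_divergent_block(block_hashes(ref), block_hashes(sus)))   # -> 4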
74 | Russel, A.B.M., Abramson, D., Bethwaite, B., Dinh, M., Enticott, C., Firth, S., Garic, S., Harper, I., Lackmann, M., Schek, S. and Vail, M. | An Abstract Virtual Instrument System for High Throughput Automatic Microscopy Modern biomedical therapies often involve disease specific drug development and may require observing cells at a very high resolution. Existing commercial microscopes behave very much like their traditional counterparts, where a user controls the microscope and chooses the areas of interest manually on a given specimen scan. This mode of discovery is suited to problems where it is easy for a user to draw a conclusion from observations by finding a small number of areas that might require further investigation. However, observations by an expert can be very time consuming and error prone when there are a large number of potential areas of interest (such as cells or vessels in a tumour), and compute intensive image processing is required to analyse them. In this paper, we propose an Abstract Virtual Instrument (AVI) system for accelerating scientific discovery. An AVI system is a novel software architecture for building an hierarchical scientific instrument – one in which a virtual instrument could be defined in terms of other physical instruments, and in which significant processing is required in producing the illusion of a single virtual scientific discovery instrument. We show that an AVI can be implemented using existing scientific workflow tools that both control the microscope and perform image analysis operations. The resulting solution is a flexible and powerful system for performing dynamic high throughput automatic microscopy. We illustrate the system using a case study that involves searching for blood vessels in an optical tissue scan, and automatically instructing the microscope to rescan these vessels at higher resolution. | Procedia Computer Science, Volume 1, Issue 1, ICCS (International Conference on Computational Science) 2010, May 2010, Pages 545-554, ISSN 1877-0509, DOI: 10.1016/j.procs.2010.04.058. | 2010 |  |
75 | Abramson, D., Bethwaite, B., Enticott, C., Garic, S., Peachey, T., Michailova, A., Amirriazi, S. | Embedding optimization in computational science workflows Workflows support the automation of scientific processes, providing mechanisms that underpin modern computational science. They facilitate access to remote instruments, databases and parallel and distributed computers. Importantly, they allow software pipelines that perform multiple complex simulations (leveraging distributed platforms), with one simulation driving another. Such an environment is ideal for computational science experiments that require the evaluation of a range of different scenarios in silico in an attempt to find ones that optimize a particular outcome. However, in general, existing workflow tools do not incorporate optimization algorithms, and thus whilst users can specify simulation pipelines, they need to invoke the workflow as a stand-alone computation within an external optimization tool. Moreover, many existing workflow engines do not leverage parallel and distributed computers, making them unsuitable for executing computational science simulations. To solve this problem, we have developed a methodology for integrating optimization algorithms directly into workflows. We implement a range of generic actors for an existing workflow system called Kepler, and discuss how they can be combined in flexible ways to support various different design strategies. We illustrate the system by applying it to an existing bio-engineering design problem running on a Grid of distributed clusters. | Elsevier, Journal of Computational Science 1, 2010, pp 41-47 | 2010 |  |
76 | Schmidberger, J., Bate, M., Reboul, C., Androulakis, S., Phan, J., Whisstock, J., Goscinski, W., Abramson, D. and Buckle, A. | MrGrid: A Portable Grid Based Molecular Replacement Pipeline | PLoS ONE, April 2010, Volume 5, Issue 4. | 2010 |  |
77 | Abramson, D., Bernabeu, M., Bethwaite, B., Burrage, K., Corrias, A., Enticott, C., Garic, S., Gavaghan, D., Peachey, T., Pitt-Francis, J., Pueyo, E., Rodriguez, B., Sher, A. and Tan, J. | High Throughput Cardiac Science on the Grid Cardiac electrophysiology is a mature discipline, with the first model of a cardiac cell action potential having been developed in 1962. Current models range from single ion channels, through very complex models of individual cardiac cells, to geometrically and anatomically detailed models of the electrical activity in whole ventricles. A critical issue for model developers is how to choose parameters that allow the model to faithfully reproduce observed physiological effects without over-fitting. In this paper, we discuss the use of a parametric modelling toolkit, called NIMROD, that makes it possible both to explore model behaviour as parameters are changed and also to tune parameters by optimizing model output. Importantly, NIMROD leverages computers on the Grid, accelerating experiments by using available high-performance platforms. We illustrate the use of NIMROD with two case studies, one at the cardiac tissue level and one at the cellular level. | Special issue of Transactions of the Royal Society. Phil. Trans. R. Soc. A 2010 368, 3907-3923. | 2010 |  |
78 | Peachey, T. C., Abramson D., and Lewis A. | Parallel line search We consider a parallel implementation of the algorithm for line search by repeated subdivision. It is shown that the finer subdivisions allowed by concurrent function evaluations may produce slower execution in some circumstances. We present a rule for guiding the choice of the number of steps in subdivisions. We also consider a heuristic for speeding convergence by aborting function evaluations in some cases. | Optimization: Structure and Applications, Springer Optimization and Its Applications, Vol. 32, Pearce, C.; Hunt, E. (Eds.), Chapter 20, pp 369 – 381, ISBN: 978-0-387-98095-9 | 2009 |  |
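A minimal sketch of the pattern (assumed function and parameter names, not the chapter's algorithm): each pass evaluates k interior points of the current bracket concurrently, then shrinks the bracket around the best point; the chapter's rule concerns how k should be chosen.

    from concurrent.futures import ThreadPoolExecutor

    def subdivision_search(f, lo, hi, k=5, passes=20):
        """Line search by repeated subdivision with k concurrent evaluations per pass."""
        with ThreadPoolExecutor(max_workers=k) as pool:
            for _ in range(passes):
                step = (hi - lo) / (k + 1)
                xs = [lo + step * (i + 1) for i in range(k)]
                ys = list(pool.map(f, xs))                     # evaluate the k points in parallel
                best = xs[min(range(k), key=ys.__getitem__)]
                lo, hi = best - step, best + step              # new bracket around the best point
        return best

    print(subdivision_search(lambda x: (x - 1.234) ** 2, 0.0, 10.0))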
79 | Bernabeu, M. O., Corrias, A., Pitt-Francis, J., Rodriguez, B., Bethwaite, B., Enticott, C., Garic, S., Peachey, T., Tan, J., Abramson, D. and Gavaghan, D. B. | Grid computing simulations of ion channel block effects on the ECG using 3D anatomically-based models In this work, Grid computing technology is combined with state-of-the-art cardiac simulation software and 3D ventricular models to provide the computational framework required to investigate changes in the electrocardiogram (ECG) caused by ion channel block. The technological challenges addressed through this work are highly interdisciplinary, including numerical, computational, modelling and electrophysiological aspects. The forward problem of electrocardiography in complex cardiac geometries is challenging in terms of biological complexity and computational tractability. Here we employ a multi-scale modelling approach to the problem, which includes representation from the ion channel to the ECG level. The Chaste software was used to simulate propagation of the AP throughout the ventricles using the bidomain model. Chaste was coupled to the Nimrod toolkit in order to (1) identify the computational resources available in the Grid, (2) set up a parameter sweep experiment and (3) schedule them appropriately according to the resources available. Our application is to enable the investigation of the impact of the block of the HERG current on the ECG waveform using a 3D ventricular model of the heart immersed in a control volume. The software pipeline developed enables (1) automated parameter sweep using multiscale models (from ion channel to ECG) and (2) reduced execution time of the simulations performed. Our results show how the QT interval in the ECG signal recorded at a node in the medium surrounding the heart increases as the K+ current is gradually blocked. In this work we have successfully integrated a cardiac simulator with a Grid toolkit, and we have applied the newly created software pipeline for the simulation of multiscale processes in cardiac electrophysiology. We believe this work will set the path for future high computing-demand studies on how cellular processes affect the overall cardiac mechanism. | Computers in Cardiology, Park City, Utah, September 13-16, 2009. | 2009 |  |
80 | Tan, J. and Abramson, D. | Optimizing Tunneled Grid Connectivity across Firewalls Grids today generally assume that concurrent network connections are possible among many processors attached to high-capacity networks. However, inter-network boundaries dividing independent institutions often have firewalls, typically to restrict how many and which ports are accessible. In some cases, ports are opened indefinitely for Grid applications, but this compromises security significantly. On the other hand, solutions that manage port openings in an ad-hoc manner for applications are non-trivial to implement. An alternative firewall traversal technique is required that will provide manageable openings with less complexity involved. This is possible through proxies and managed tunnels using ports already authorized across the firewalls. We have developed a transparent connectivity mechanism for this, called Remus, which reroutes Grid connections through a tunnel on ports allowed across firewalls. However, a single tunnel presents a performance bottleneck. In this paper, we present the method by which Remus distributes several connections over multiple tunnels, improving throughput as a result. Rerouting wrappers hide the tunneling from applications, intercepting outgoing connections and rerouting them transparently. Well-known and mature tools and protocols, such as SSH and/or SOCKS, are utilized, instead of imposing customized, non-standard mechanisms. Results of our experiments are also presented for large file transfers over a Globus-based Grid that uses Remus. | 7th Australasian Symposium on Grid Computing and e-Research (AUSGRID 2009), Vol. 99, W. Kelly and P. Roe, Eds, Wellington, NZ, Jan 20th – 23rd, 2009. | 2009 |  |
81 | Schmidberger, J., Bethwaite, B., Enticott, C., Bate, M., Androulakis, S., Faux, N., Reboul, C., Phan, J., Whisstock, J., Garic, S., Goscinski, W., Abramson, D., and Buckle, A. | High-throughput Protein Structure Determination Using Grid Computing Determining the X-ray crystallographic structures of proteins using the technique of molecular replacement (MR) can be a time and labor-intensive trial-and-error process, involving evaluating tens to hundreds of possible solutions to this complex 3D jigsaw puzzle. For challenging cases indicators of success often do not appear until the later stages of structure refinement, meaning that weeks or even months could be wasted evaluating MR solutions that resist refinement and do not lead to a final structure. In order to improve the chances of success as well as decrease this timeframe, we have developed a novel grid computing approach that performs many MR calculations in parallel, speeding up the process of structure determination from weeks to hours. This high-throughput approach also allows parameter sweeps to be performed in parallel, improving the chances of MR success. | IEEE Workshop on High Performance Computational Biology, May 25, 2009, Rome, Italy, co-located with IPDPS 2009. | 2009 |  |
82 | Chan, P and Abramson, D. | Persistence and Communication State Transfer in an Asynchronous Pipe Mechanism Wide-area distributed systems offer new opportunities for executing large-scale scientific applications. On these systems, communication mechanisms have to deal with dynamic resource availability and the potential for resource and network failures. Connectivity losses can affect the execution of workflow applications, which require reliable data transport between components. We present the design and implementation of p-channels, an asynchronous and fault-tolerant pipe mechanism suitable for coupling workflow components. Fault-tolerant communication is made possible by persistence, through adaptive caching of pipe segments while providing direct data streaming. We present the distributed algorithm for implementing: (a) caching of pipe data segments; (b) asynchronous read operation; and (c) communication state transfer to handle dynamic process joins and leaves. | International Journal of Grid and High Performance Computing, Vol. 1, Issue 3, 2009, Pages: 18-36. | 2009 |  |
83 | Abramson, D, Bethwaite, B, Dinh, M, Enticott, C, Firth, S, Garic, S, Harper, I, Lackmann, M, Nguyen, H, Ramdas, T, Russel, A.B.M, Schek, S and Vail, M. | Virtual Microscopy and Analysis using Scientific Workflows Most commercial microscopes are stand-alone instruments, controlled by dedicated computer systems. These provide limited storage and processing capabilities. Virtual microscopes, on the other hand, link the image capturing hardware and data analysis software into a wide area network of high performance computers, large storage devices and software systems. In this paper we discuss extensions to Grid workflow engines that allow them to execute scientific experiments on virtual microscopes. We demonstrate the utility of such a system in a biomedical case study concerning the imaging of cancer and antibody based therapeutics. | IEEE e-Science 2009, Dec 9 – 11th, Oxford, UK. | 2009 |  |
84 | Abramson, D., Bethwaite, B., Enticott, C., Garic, S., Peachey, T., Michailova, A., Amirriazi, S., Chitters, R. | Robust Workflows for Science and Engineering Scientific workflow tools allow users to specify complex computational experiments and provide a good framework for robust science and engineering. Workflows consist of pipelines of tasks that can be used to explore the behaviour of some system, involving computations that are either performed locally or on remote computers. Robust scientific methods require the exploration of the parameter space of a system (some of which can be run in parallel on distributed resources), and may involve complete state space exploration, experimental design or numerical optimization techniques. Whilst workflow engines provide an overall framework, they have not been developed with these concepts in mind, and in general, don't provide the necessary components to implement robust workflows. In this paper we discuss Nimrod/K - a set of add-in components and a new run-time machine for a general workflow engine, Kepler. Nimrod/K provides an execution architecture based on the tagged dataflow concepts developed in the 1980s for highly parallel machines. This is embodied in a new Kepler 'Director' that orchestrates the execution on clusters, Grids and Clouds using many-task computing. Nimrod/K also provides a set of 'Actors' that facilitate the various modes of parameter exploration discussed above. We demonstrate the power of Nimrod/K to solve real problems in cardiac science. | 2nd Workshop on Many-Task Computing on Grids and Supercomputers (MTAGS 2009), Portland, USA, 16 November 2009 | 2009 |  |
85 | Abramson D., Bethwaite B., Enticott C., Garic S., Peachey T. | Parameter Space Exploration using Scientific Workflows In recent years there has been interest in performing parameter space exploration across “scientific workflows”; however, many existing workflow tools are not well suited to this. In this paper we augment existing systems with a small set of special “actors” that implement the parameter estimation logic. Specifically, we discuss a set of new Kepler actors that support both complete and partial sweeps based on experimental design techniques. When combined with a novel parallel execution mechanism, we are able to execute parallel sweeps and searches across workflows that run on distributed “Grid” infrastructure. We illustrate our new system with a case study in cardiac cell modelling. | ICCS 2009, Baton Rouge, Louisiana, May 2009 | 2009 |  |
86 | Tan, J., Abramson, D. and Enticott, C. | A Virtual Connectivity Layer for Grids | IEEE e-Science 2009, Dec 9 – 11th, Oxford, UK. | 2009 |  |
87 | Smanchat, S., Indrawan, M., Ling S., Enticott, C. and Abramson, D. | Scheduling Multiple Parameter Sweep Workflow Instances on the Grid Due to its ability to provide a high-performance computing environment, the grid has become an important infrastructure to support eScience. To utilise the grid for parameter sweep experiments, workflow technology combined with tools such as Nimrod/K is used to orchestrate and automate scientific services provided on the grid. As a parameter sweep workflow needs to be executed numerous times, it is more efficient to execute multiple instances of the parameter sweep workflow in parallel. However, this parallel execution can be delayed as every workflow instance requires the same set of resources, leading to a resource competition problem. Although many algorithms exist for scheduling grid workflows, there has been little effort in considering multiple workflow instances and resource competition in the scheduling process. In this paper, we propose a scheduling algorithm for parameter sweep workflows based on resource competition. The proposed algorithm aims to support multiple workflow instances and avoid allocating resources with high resource competition to minimise delay due to the blocking of tasks. The result is evaluated using simulation and compared with an existing scheduling algorithm. | IEEE e-Science 2009, Dec 9 – 11th, Oxford, UK. | 2009 |  |
88 | Pettit, C.J., Bishop, I.D., Borda, A., Uotila, P., Sposito, V.J., Raybould, L. and Russel, A.B.M. | An e-Science Approach to Climate Change Adaptation According to the Intergovernmental Panel on Climate Change’s 4th assessment report (IPCC 2007), warming of the climate system is ‘unequivocal’ as indicated through a number of earth observations including temperature, melting snow and sea level rise. With climate change imminent there is a need to explore both adaptation and mitigation strategies. This paper focuses on the former, but adaptation strategies might also inform climate change mitigation strategies. Specifically, the aim of this research is to test a new approach to cross-organisational research collaboration to support policy makers, planners, and land managers in addressing climate change adaptation in south-west Victoria, Australia. The approach is based on the concept of virtual collaboration also known as e-Science. It comprises several core components including: (i) climate change data and models, (ii) land suitability analysis, (iii) geographical visualisation tools, and (iv) Virtual Organisation (VO) platform. A prototype e-Science VO platform known as the e-Resource Centre (e-RC) has been built for the study region to support data sharing, modelling and visualisation of climate change adaptation options for 2050. This paper reports on the current research in applying an e-Science approach, known as ecoinformatics, to enable better collaboration across organisations in addressing climate change adaptation. The next stage of the research is to evaluate the various components comprising the ecoinformatics platform, particularly from the point of view of end users. | Spatial Sciences Institute Biennial International Conference (SSC 2009): Place & Purpose Symposia, Adelaide, South Australia, 2009. | 2009 |  |
89 | Abramson, D.A., Chu, C., Kurniawan, D., Searle, A. | Relative Debugging in an integrated development environment Relative Debugging allows a user to compare the internal state of two programs as they run, making it possible to test whether two programs perform the same function given the same input. When implemented with a command line interface, a relative debugger looks like traditional debugger tools with the addition of commands that describe which structures should be equivalent in the two programs. In this paper, we discuss relative debugging within an integrated development environment, and show that there are significant advantages over a command line form. We describe a pluggable, modular, architecture that works with a variety of different products, including Microsoft's Visual Studio, SUN's NetBeans and IBM's Eclipse. | Software – Practice and Experience, 2009, pp 1157-1183, John Wiley & Sons Ltd. | 2009 |  |
90 | Ramdas, T., Egan, G. K., Abramson, D. A., & Baldridge, K. K. | ERI Sorting for Emerging Processor Architectures Electron Repulsion Integrals (ERIs) are a common bottleneck in ab initio computational chemistry. It is known that sorted/reordered execution of ERIs results in efficient SIMD/vector processing. This paper shows that reconfigurable computing and heterogeneous processor architectures can also benefit from a deliberate ordering of ERI tasks. However, realizing these benefits as net speedup requires a very rapid sorting mechanism. This paper presents two such mechanisms. Included in this study are analytical, simulation-based, and experimental benchmarking approaches to consider five use cases for ERI sorting, i.e. SIMD processing, reconfigurable computing, limited address spaces, instruction cache exploitation, and data cache exploitation. Specific consideration is given to existing cache-based processors, FPGAs, and the Cell Broadband Engine processor. It is proposed that the analyses conducted in this work should be built upon to aid the development of software autotuners which will produce efficient ab initio computational chemistry codes for a variety of computer architectures. | Computer Physics Communications, Elsevier, 2009 | 2009 |  |
91 | Ho, T. and Abramson, D. | An Active Data Model The Grid allows scientists to design and perform very large-scale experiments that were not previously possible. Many of these experiments produce a large number of data sets and, significantly, the amount of data being captured, generated, replicated and archived is growing at an astonishing rate. We have designed and prototyped an Active Data System that provides a complete, automated solution to problems that arise in the management and curation of derived data. Significantly, the system allows users to recompute data rather than necessarily store it, and adds a layer that provides efficient access to remotely distributed replicated sources across different middleware stacks. | High Performance & Large Scale Computing, W. Gentzsch, L. Grandinetti, G. Joubert (Editors), IOS Press, Amsterdam, The Netherlands, Advances in Parallel Computing, 2009. | 2009 |  |
92 | Dimmock M. R., J. E. Gillam, T. E. Beveridge, J.M.C. Brown, R. A. Lewis and C.J. Hall | The Nimrod/G Grid Resource Broker for Economic-based Scheduling We discuss the design, development and experimental evaluation of the Nimrod/G resource broker that supports deadline- and budget-constrained, quality-of-service-driven application scheduling on worldwide distributed resources. The broker is able to dynamically adapt itself when there is a change in the availability of resources and user QoS requirements during the application execution. It also supports scalable, controllable, measurable, and easily enforceable policies and scheduling algorithms for allocation of resources to user applications. It demonstrates that the computational economy approach for Grid computing provides an effective means for pushing Grids into mainstream computing and enables the creation of a worldwide Grid marketplace. The Nimrod tools for modelling parametric experiments are mature and in production use for cluster and Grid computing. The Nimrod/G task-farming engine (TFE) services have been used in developing customised clients and applications. An associated dispatcher is capable of deploying computations (jobs) on Grid resources enabled by Globus, Legion, and Condor. The TFE job management protocols and services can be used for developing new scheduling policies. We have built a number of market-driven deadline- and budget-constrained scheduling algorithms, namely, time and cost optimizations with deadline and budget constraints. The results of scheduling experiments with different QoS requirements on the World Wide Grid resources show promising insights into the effectiveness of an economic paradigm for management of resources and their usefulness in application scheduling with optimizations. The results demonstrate that the users have options and can, indeed, trade off between the deadline and the budget depending on their requirements, thus encouraging them to reveal their true requirements to increase the value delivered by the utility. | in Market-Oriented Grid and Utility Computing (Wiley Series on Parallel and Distributed Computing), Editors Rajkumar Buyya and Kris Bubendorfer, ISBN-13: 978-0470287682, November 2009. | 2009 |  |
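As an illustration of the cost-optimisation strategy described (a hypothetical sketch, not the Nimrod/G scheduler; resource names and figures are invented), jobs can be packed onto the cheapest resources first, adding dearer capacity only when the deadline would otherwise be missed.

    def cost_optimise(n_jobs, resources, deadline_hours):
        """Greedy cost optimisation: cheapest resources first, subject to the deadline."""
        # resources: list of dicts with 'name', 'cost_per_job', 'jobs_per_hour'
        plan, remaining = {}, n_jobs
        for r in sorted(resources, key=lambda r: r["cost_per_job"]):
            if remaining == 0:
                break
            capacity = int(r["jobs_per_hour"] * deadline_hours)   # jobs this resource can finish in time
            take = min(remaining, capacity)
            plan[r["name"]] = take
            remaining -= take
        if remaining:
            raise RuntimeError("deadline cannot be met with the given resources")
        return plan

    resources = [{"name": "campus-cluster", "cost_per_job": 0.0,  "jobs_per_hour": 40},
                 {"name": "cloud-spot",     "cost_per_job": 0.02, "jobs_per_hour": 250}]
    print(cost_optimise(1000, resources, deadline_hours=4))
    # -> {'campus-cluster': 160, 'cloud-spot': 840}: free capacity first, paid capacity for the rest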
94 | Goscinski, W. and Abramson, D. | An Infrastructure for the Deployment of e-Science Applications Recent developments in grid middleware and infrastructure have made it possible for a new generation of scientists, e-Scientists, to envisage and design large-scale computational experiments. However, while scheduling and execution of these experiments has become common, developing, deploying and maintaining application software across a large distributed grid remains a difficult and time consuming task. Without simple application deployment, the potential of grids cannot be realized by grid users. In response, this paper presents the motivation, design, development and demonstration of a framework for grid application deployment. Using this framework, e-Scientists can develop platform-independent parallel applications, characterise and identify suitable computational resources and deploy applications easily. | in | 2008 |  |
95 | Chan, P. and Abramson, D. | A Programming Framework for Incremental Data Distribution in Iterative Applications. Successful HPC over desktop grids and nondedicated NOWs is challenging, since good performance is difficult to achieve due to dynamic workloads. On iterative data-parallel applications, this is addressed by dynamic data distribution. However, current approaches migrate an application from one distribution to another in one single phase, which can impact performance. In this paper, we present D3-ARC, a programming framework to support adaptive and incremental data distribution, so that data migration takes place over several successive iterations. D3-ARC consists of a runtime system and an API for specifying the distribution of arrays as well as how data redistribution takes place. We demonstrate how D3-ARC can be used to develop an incremental strategy for data distribution in a Poisson solver, utilising a runtime feedback mechanism to determine how much data to migrate during each iteration. | In: Proc. of the 2008 IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA-2008). 10-12 December 2008. Sydney, Australia. pp 244 – 251, IEEE Press ISBN 978-0-7695-3471-8. | 2008 |  |
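The incremental-redistribution idea can be sketched in a few lines: rather than moving an array to its new distribution in one phase, the planner below caps the number of rows moved per iteration. The function and its arguments are illustrative only and bear no relation to the D3-ARC API.

```python
# Sketch: plan row transfers so no iteration moves more than step_limit rows.
def incremental_redistribution(current, target, step_limit):
    """current[i]/target[i]: rows held by worker i now / eventually.
    Returns one list of (src, dst, rows) transfers per iteration."""
    assert sum(current) == sum(target), "redistribution must conserve rows"
    rows, schedule = current[:], []
    while rows != target:
        budget, moves = step_limit, []
        senders = [i for i, (r, t) in enumerate(zip(rows, target)) if r > t]
        receivers = [i for i, (r, t) in enumerate(zip(rows, target)) if r < t]
        for s in senders:
            for d in receivers:
                amount = min(rows[s] - target[s], target[d] - rows[d], budget)
                if amount > 0:
                    rows[s] -= amount
                    rows[d] += amount
                    budget -= amount
                    moves.append((s, d, amount))
        schedule.append(moves)
    return schedule

# Example: drain 30 rows from worker 0 to worker 2, at most 10 rows per iteration,
# so the solver keeps iterating while the distribution converges gradually.
print(incremental_redistribution([60, 40, 20], [30, 40, 50], step_limit=10))
```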
96 | Ramdas, T., Abramson, D., Egan, G. & Baldridge, K. | On ERI Sorting for SIMD Execution of Large-Scale Hartree-Fock SCF Given the resurgent attractiveness of single-instruction-multiple-data (SIMD) processing, it is important for high-performance computing applications to be SIMD-capable. The Hartree–Fock SCF (HF-SCF) application, in its canonical form, cannot fully exploit SIMD processing. Prior attempts to implement Electron Repulsion Integral (ERI) sorting functionality to essentially “SIMD-ify” the HF-SCF application have met frustration because of the low throughput of the sorting functionality. With greater awareness of computer architecture, we discuss how the sorting functionality may be practically implemented to provide high performance. Overall system performance analysis, including memory locality analysis, is also conducted, and further emphasises that a system with ERI sorting is capable of very high throughput. We discuss two alternative implementation options, with one immediately accessible software-based option discussed in detail. The impact of workload characteristics on expected performance is also discussed, and it is found that in general as basis set size increases the potential performance of the system also increases. Consideration is given to conventional CPUs, GPUs, FPGAs, and the Cell Broadband Engine architecture. | Comp. Phys. Commun., doi:10.1016/j.cpc.2008.01.045, 2008. | 2008 |  |
97 | Abramson, D. A., Enticott, C., Peachey, T. | Parameter Estimation using Scientific Workflows [4] exposes a variety of middleware layers, from Globus through to ad-hoc interfaces like SSH. Other engines, like Triana and Taverna, allow users to invoke services as Web Services, but provide no explicit support for Grid middleware. In spite of its significant power, Kepler, and many other current workflow systems, do not support dynamic parallel execution of the workflow and its components. This means that users must explicitly code a workflow to cause it to execute elements in parallel. This significantly complicates the workflow and obscures the underlying business logic. | Poster for Fourth IEEE International Conference on eScience, Indianapolis, 7-12 December 2008 | 2008 |  |
98 | Ayyub, S., Abramson, D., Enticott, C., Garic, S., Tan, J. | Fault-tolerant execution of large parameter sweep applications across multiple VOs with storage constraints Applications that span multiple virtual organizations (VOs) are of great interest to the e-science community. However, our recent attempts to execute large-scale parameter sweep applications (PSAs) for real-world climate studies with the Nimrod/G tool have exposed problems in the areas of fault tolerance, data storage and trust management. In response, we have implemented a task-splitting approach that facilitates breaking up large PSAs into a sequence of dependent subtasks, improving fault tolerance; provides a garbage collection technique that deletes unnecessary data; and employs a trust delegation technique that facilitates flexible third party data transfers across different VOs | Concurrency and Computation: Practice and Experience. Currently Online, 25/8/2008, | 2008 |  |
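A minimal sketch of the task-splitting and garbage-collection ideas in the entry above, assuming a hypothetical three-stage pipeline per parameter point; the stage names and local file layout are invented, and the real mechanism operates on grid jobs and third-party transfers rather than local files.

```python
# Sketch: run each parameter point as a chain of dependent subtasks and delete
# each intermediate file as soon as the next stage has consumed it.
import os, tempfile

def run_stage(stage, param, in_file, workdir):
    """Stand-in for submitting one subtask; reads the previous stage's output (if any)."""
    out_file = os.path.join(workdir, f"{stage}_{param}.out")
    previous = open(in_file).read() if in_file else ""
    with open(out_file, "w") as f:
        f.write(previous + f"{stage}(param={param})\n")
    return out_file

def run_point(param, stages, workdir):
    """One parameter point as a chain of subtasks with eager cleanup."""
    current = None
    for stage in stages:
        produced = run_stage(stage, param, current, workdir)
        if current:
            os.remove(current)          # garbage-collect the consumed intermediate
        current = produced
    return current                      # only the final result survives

with tempfile.TemporaryDirectory() as wd:
    finals = [run_point(p, ["preprocess", "simulate", "postprocess"], wd) for p in (0.1, 0.2)]
    print([open(f).read().splitlines() for f in finals])
```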
99 | Kurniawan, D. and Abramson, D. | An IDE Framework for Grid Application Development Grid computing enables the aggregation of a large number of computational resources for solving complex scientific and engineering problems. However, writing, deploying, and testing grid applications over highly heterogeneous and distributed infrastructure are complex and error prone. A number of grid integrated development environments (IDEs) have been proposed and implemented to simplify grid application development. This paper presents an extension to our previous work on a grid IDE in the form of a software framework with a well-defined API and an event mechanism. It provides novel tools to automate routine grid programming tasks and allow programmable actions to be invoked based on certain events. Its system model regards resources as first-class objects in the IDE and allows tight integration between the execution platforms and the code development process. We discuss how the framework improves the process of grid application development. | Grid 2008, Sept 29th – Oct 1st, Tsukuba, Japan. | 2008 |  |
100 | Bethwaite, B., Abramson, D., Buckle, A | Grid Interoperability: An Experiment in Bridging Grid Islands In the past decade Grid computing has matured considerably. A number of groups have built, operated, and expanded large testbed and production Grids. These Grids have inevitably been designed to meet the needs of a limited set of initial stakeholders, resulting in varying and sometimes ad-hoc specifications. As the use of e-Science becomes more common, this inconsistency is increasingly problematic for the growing set of applications requiring more resources than a single Grid can offer, as spanning these Grid islands is far from trivial. Thus, Grid interoperability is attracting much interest as researchers try to build bridges between separate Grids. Recently we ran a case study that tested interoperation between several Grids, during which we recorded and classified the issues that arose. In this paper we provide empirical evidence supporting existing interoperability efforts, and identify current and potential barriers to Grid interoperability. | PRAGMA Workshop at e-Science, 2008, 4th IEEE International Conference on e-Science, Indiana, Dec 8th – 12th, 2008 | 2008 |  |
101 | Sher, A., Abramson, D, Enticott, C, Garic, S, Gavaghan, D., Noble, D., Noble, P., Peachey, T. | Incorporating local Ca2+ dynamics into single cell ventricular models using Nimrod/O Understanding physiological mechanisms underlying the activity of the heart is of great medical importance. Mathematical modeling and numerical simulation have become a widely accepted method of unraveling the underlying mechanism of the heart. Calcium (Ca2+) dynamics regulate the excitation-contraction coupling in heart muscle cells and hence are among the key players in maintaining normal activity of the heart. Many existing ventricular single cell models lack the biophysically detailed description of the Ca2+ dynamics. In this paper we examine how we can improve existing ventricular cell models by replacing their description of Ca2+ dynamics with the local Ca2+ control models. When replacing the existing Ca2+ dynamics in a given cell model with a different Ca2+ description, the parameters of the Ca2+ subsystem need to be re-fitted. Moreover, the search through the plausible parameter space is computationally very intensive. Thus, the Grid enabled Nimrod/O software tools are used for optimizing the cell parameters. Nimrod/O provides a convenient, user-friendly framework for this as exemplified by the incorporation of local Ca2+ dynamics into the ventricular single cell Noble 1998 model. | ICCS 2008, Krakow, Poland, June 2008. | 2008 |  |
102 | Amirriazi S., Chang S., Peachey T., Abramson D. and Michailova A. | Optimizing Cardiac Excitation-Metabolic Model By Using Parallel Grid Computing | Biophysics 2008, Long Beach, California, February 2008. | 2008 |  |
103 | Pettit, C.J. and Russel, A.B.M. | A Spatial Decision Support System Framework For Climate Change Adaptation In Victoria Climate change has been acknowledged as one of the greatest threats to global social, environmental and economic well-being. Climate change poses significant threats to existing urban infrastructure, current water use practices and agricultural industries, to name a few. In the State of Victoria in Australia, there is an e-Science initiative to mobilise cross-organisational expertise in order to underpin policy-making as it relates to climate change adaptation. An underpinning Spatial Decision Support System Framework has been designed to support integrated systems modelling across a number of modelling domains including climate, hydrological, crop and land suitability modelling. A number of core datasets serve as inputs in the spatial decision support system framework including land utilization, soil, geology, climate prediction (historical and forecasts) and many others. These datasets feed into a number of system models, which are used to better understand the present and future land suitability, hydrological processes, agricultural productivity and sea level changes. The climate change models align with the Intergovernmental Panel on Climate Change (IPCC) scenarios and have been downscaled to a regional level. In this research we present an e-Science based virtual collaborative workspace which has been developed to support the exchange of data and models within a secure cross-organisational data grid. We also examine the use of geographical visualisation as a front end to complex multi-disciplinary scenario modelling, to enable end users to better understand the policy and on-the-ground implications of climate change for the Victorian landscape. | XXI Congress of the International Society for Photogrammetry and Remote Sensing (ISPRS), Beijing, China, July 2008, pp 515-521. | 2008 |  |
104 | Tan, J., Abramson, D. & Enticott,C. | REMUS: A Rerouting and Multiplexing System for Grid Connectivity Across Firewalls The Grid provides unique opportunities for high-performance computing through distributed applications that execute over multiple remote resources. Participating institutions can form a virtual organization to maximize the utilization of collective resources as well as to facilitate collaborative projects. However, there are two design aspects in distributed environments like the Grid that can easily clash: security and resource sharing. It may be that resources are secure but are not entirely conducive to resource sharing, or networks are wide open for resource sharing but sacrifice security as a result. We developed REMUS, a rerouting and multiplexing system that provides a compromise through connection rerouting and wrappers. REMUS reroutes connections using proxies, ports and protocols that are already authorized across firewalls, avoiding the need to make new openings through the firewalls. We also encapsulate applications within wrappers, transparently rerouting the connections among Grid applications without modifying their programs. In this paper, we describe REMUS and the tests we conducted across firewalls using two Grid middleware case studies: Globus Toolkit 2.4 and Nimrod/G 3.0. | Journal of Grid Computing, Springer Netherlands ISSN 1570-7873 (Print) 1572-9814 (Online),DOI 10.1007/s10723-008-9104-1, 28 June 2008 | 2008 |  |
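The rerouting idea in the REMUS entry above can be illustrated with a tiny relay: it listens on a port that is already authorized through the firewall and forwards each connection to an internal service, so no new firewall openings are required. Ports, hosts and function names here are placeholders; the real system also wraps applications transparently and works with existing proxies and protocols.

```python
# Sketch: forward connections arriving on an already-authorized port to an internal service.
import socket, threading

def pipe(src, dst):
    """Copy bytes one way until the source closes, then shut the other side."""
    while data := src.recv(4096):
        dst.sendall(data)
    dst.close()

def relay(listen_port, target_host, target_port):
    srv = socket.create_server(("0.0.0.0", listen_port))
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection((target_host, target_port))
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

# Example (hypothetical hosts/ports): forward the authorized port 8443 to an internal job manager.
# relay(8443, "internal-node.example.org", 2119)
```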
105 | Abramson D, Enticott C, Altintas I. | Nimrod/K: Towards Massively Parallel Dynamic Grid Workflows A challenge for Grid computing is the difficulty in developing software that is parallel, distributed and highly dynamic. Whilst there have been many general purpose mechanisms developed over the years, Grid programming still remains a low level, error prone task. Scientific workflow engines can double as programming environments, and allow a user to compose ‘virtual’ Grid applications from pre-existing components. Whilst existing workflow engines can specify arbitrary parallel programs (where components use message passing), they are typically not effective with large and variable parallelism. Here we discuss dynamic dataflow, originally developed for parallel tagged dataflow architectures (TDAs), and show that these can be used for implementing Grid workflows. TDAs spawn parallel threads dynamically without additional programming. We have added TDAs to Kepler, and show that the system can orchestrate workflows that have large amounts of variable parallelism. We demonstrate the system using case studies in chemistry and in cardiac modelling. | IEEE SuperComputing 2008, Austin, Texas November 2008 | 2008 |  |
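The tagged-dataflow behaviour described above can be suggested with ordinary futures: each token carries its own tag, and the downstream actor is applied to all tags concurrently without the workflow author writing any parallel code. The executor, actor and function names are placeholders, not the Kepler/Nimrod/K interfaces.

```python
# Sketch: fan out one token per tag and join on matching tags.
from concurrent.futures import ThreadPoolExecutor

def simulate(tag, parameter):
    """Downstream actor: one logical copy runs per tag."""
    return tag, parameter ** 2          # stand-in for a real model run

def run_tagged(parameters):
    with ThreadPoolExecutor() as pool:
        # The sweep actor emits one token per value, each with a fresh tag;
        # the runtime spawns a copy of the downstream actor per tag.
        futures = [pool.submit(simulate, tag, p) for tag, p in enumerate(parameters)]
        return dict(f.result() for f in futures)

print(run_tagged([0.5, 1.0, 2.0, 4.0]))
```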
106 | Ramdas, T., Abramson, D., Egan, G. & Baldridge, K. | Uniting extrinsic vectorization and shell structure for efficient SIMD evaluation of Electron Repulsion Integrals Future computer architectures are likely to feature greater reliance on single instruction multiple data (SIMD) processing for high throughput processing of data-intensive workloads. For algorithms that rely heavily on electron repulsion integrals (ERIs), exploitation of SIMD processing requires extrinsic vectorization, i.e. the sorting of ERIs into sets with equivalent class that may be computed with an identical instruction stream. Such sorting is incongruous with the commonly exploited shell structure whereby ERI are generated over shells such that initialization/bootstrap values may be reused, yielding significant savings in ERI evaluation time. In this work, we discuss how extrinsic vectorization may be unified with shell structure through the exploitation of memory access locality. | to appear, J. Chem. Phys. Vol 349/1-3 pp 147-157, doi:10.1016/j.chemphys.2008.02.038 | 2008 |  |
107 | Goscinski, W. and Abramson, D. | Parallel Programming on a High Performance Application-runtime High-performance application development remains challenging, particularly for scientists making the transition to a heterogeneous grid environment. In general areas of computing, virtual environments such as Java and .Net have proved to be successful in fostering application development, allowing users to target and compile to a single environment, rather than a range of platforms, instruction sets and libraries. However, existing runtime environments are focused on business and desktop computing and they do not support the necessary high-performance computing (HPC) abstractions required by e-Scientists. Our work is focused on developing an application-runtime that can support these services natively. The result is a new approach to the development of an application-runtime for HPC: the Motor system has been developed by integrating a high-performance communication library directly within a virtual machine. The Motor message passing library is integrated alongside and in cooperation with other runtime libraries and services while retaining a strong message passing performance. As a result, the application developer is provided with a common environment for HPC application development. This environment supports both procedural languages, such as C, and modern object-oriented languages, such as C#. This paper describes the unique Motor architecture, presents its implementation and demonstrates its performance and use. | Concurrency and Computation: Practice and Experience. June 2008. doi:10.1002/cpe.1325 | 2008 |  |
108 | Ramdas, T., Egan, G. K., Abramson, D., & Baldridge, K. K. | Run-time thread sorting to expose data-level parallelism We address the problem of data parallel processing for computational quantum chemistry (CQC). CQC is a computationally demanding tool to study the electronic structure of molecules. An important algorithmic component of these computations is the evaluation of Electron Repulsion Integrals (ERIs). A key problem with ERI evaluation is control-flow variation between different ERI evaluations, which can only be resolved at runtime. This causes the computation to be unsuitable for data parallel execution. However, it is observed that although there is variation between ERI evaluations, the variation is limited; in fact there are a limited number of ERI classes present within any given workload. Conceptually, it is possible to classify the ERIs into sizable sets, and execute these sets in a data parallel fashion. Practically, creating these sets is computationally expensive. We describe an architecture to perform this thread sorting, where high throughput is achieved with small associative and multiport memories. The performance of the prototype is evaluated with FPGA synthesis. We go on to envision other uses for thread sorting, in general-purpose manycore architectures. | In Proceedings of the International Conference on Application-Specific Systems, Architectures and Processors. (pp. 55-60). | 2008 |  |
109 | Peachey, T. C., Diamond, N. T., Abramson, D. A. Sudholt, W., Michailova, A. Amirriazi, S. | Fractional Factorial Design for Parameter Sweep Experiments using Nimrod/E The techniques of formal experimental design and analysis are powerful tools for scientists and engineers. However, these techniques are currently underused for experiments conducted with computer models. This has motivated the incorporation of experimental design functionality into the Nimrod tool chain. Nimrod has been extensively used for exploration of the response of models to their input parameters; the addition of experimental design tools will combine the efficiency of carefully designed experiments with the power of distributed execution. This paper describes the incorporation of one type of design, the fractional factorial design, and associated analysis tools, into the Nimrod framework. The result provides a convenient environment that automates the design of an experiment, the execution of the jobs on a computational grid and the return of results, and which assists in the interpretation of those results. Several case studies are included which demonstrate various aspects of this approach. | Journal of Scientific Programming, Volume 16, Numbers 2,3, 2008. | 2008 |  |
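For readers unfamiliar with fractional factorial designs, the short sketch below builds a two-level 2^(4-1) design by taking a full factorial over the base factors and generating the remaining factor as a product of base columns. The generator choice ("D = ABC") is just an example; the Nimrod/E design machinery itself is not shown here.

```python
# Sketch: construct a two-level fractional factorial design from generators.
from itertools import product

def fractional_factorial(base_factors, generators):
    """Return a list of runs; each run maps factor name -> -1 or +1."""
    runs = []
    for levels in product([-1, 1], repeat=len(base_factors)):
        run = dict(zip(base_factors, levels))
        for new_factor, parents in generators.items():
            value = 1
            for p in parents:
                value *= run[p]         # aliased factor = product of its generator columns
            run[new_factor] = value
        runs.append(run)
    return runs

# A 2^(4-1) design: 8 runs instead of 16, with D confounded with the ABC interaction.
for run in fractional_factorial(["A", "B", "C"], {"D": ["A", "B", "C"]}):
    print(run)
```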
110 | Ayyub, S., Abramson, D. A. | GridRod – A dynamic runtime scheduler for grid workflows | Proceedings of the 2007 International Conference on Supercomputing, 17 June 2007 to 21 June 2007, The Association for Computing Machinery, New York NY USA, pp. 43-52. | 2007 |  |
111 | Faux N., Beitz A., Bate M., Amin A. A., Atkinson I., Enticott C., Mahmood K., Swift M., Treloar A., Abramson D., Whisstock J. C., and Buckle A. M. | eResearch Solutions for High Throughput Structural Biology Structural biology research places significant demands upon computing and informatics infrastructure. Protein production, crystallization and X-ray data collection require solutions to data management, annotation, target tracking and remote experiment monitoring. Structure elucidation is computationally demanding and requires user-friendly interfaces to high-performance computing resources. Here we discuss how these challenges are being met at the Protein Crystallography Unit at Monash University. Specifically, we have developed informatics solutions for each stage in the structural biology pipeline, from DNA cloning through to protein structure determination. This infrastructure will be pivotal for accelerating the process of structural discovery and will be of significant interest to other laboratories worldwide. | The 3rd IEEE International Conference on e-Science and Grid Computing, Bangalore, India, 10-13 December 2007, CS Press, pp 221-227. | 2007 |  |
112 | Kurniawan, D., Abramson, D. A. | An integrated grid development environment in Eclipse With the proliferation of Grid computing, potentially vast computational resources are available for solving complex problems in science and engineering. However, writing, deploying, and testing e-Science applications over highly heterogeneous and distributed infrastructure are complex and error prone. Further complicating matters, programmers may need to target a variety of different Grid middleware packages. This paper presents the design and implementation of Worqbench, an integrated, modular and middleware neutral framework for e-Science application development on the Grid. Worqbench can be incorporated into a number of existing Integrated Development Environments, further leveraging the advantages of such systems. We illustrate one such implementation in the Eclipse environment. | Proceedings of the Third IEEE International Conference on e-Science and Grid Computing, 10 December 2007 to 13 December 2007, IEEE Computer Society, Los Alamitos CA USA, pp. 491-498 | 2007 |  |
113 | Lynch, A. H., D. Abramson, K. J. Beringer, and P. Uotila | Influence of savanna fire on Australian monsoon season precipitation and circulation as simulated using a distributed computing environment Fires in the Australian savanna have been hypothesized to affect monsoon evolution, but the hypothesis is controversial and the effects have not been quantified. A distributed computing approach allows the development of a challenging experimental design that permits simultaneous variation of all fire attributes. The climate model simulations are distributed around multiple independent computer clusters in six countries, an approach that has potential for a range of other large simulation applications in the earth sciences. The experiment clarifies that savanna burning can shape the monsoon through two mechanisms. Boundary-layer circulation and large-scale convergence is intensified monotonically through increasing fire intensity and area burned. However, thresholds of fire timing and area are evident in the consequent influence on monsoon rainfall. In the optimal band of late, high intensity fires with a somewhat limited extent, it is possible for the wet season to be significantly enhanced. | Geophys. Res. Lett., 34, L20801, doi:10.1029/2007GL030879. | 2007 |  |
114 | Ho, T. and Abramson, D., | Active Data: Supporting the Grid Data Life Cycle Scientific applications often involve computation intensive workflows and may generate large amount of derived data. In this paper we consider a life cycle, which starts when the data is first generated, and tracks its progress through replication, distribution, deletion and possible re-computation. We describe the design and implementation of an infrastructure, called Active Data, which combines existing Grid middleware to support the scientific data lifecycle in a platform-neutral environment. | | 2007 |  |
115 | Chan, P. and Abramson, D. | Persistence and Communication State Transfer in an Asynchronous Pipe Mechanism | The 13th International Conference on Parallel and Distributed Systems (ICPADS 07), Hsinchu, Taiwan, December 5-7, 2007. | 2007 |  |
116 | Chan, P. and Abramson, D. | pi-spaces: Support for Decoupled Communication in Wide-Area Parallel Applications Wide-area distributed systems like computational grids are emergent infrastructures for high-performance parallel applications. On these systems, communication mechanisms have to deal with many issues, including: private networks, heterogeneity, dynamic resource availability, and transient link failures. To address this, we present pi-Spaces, a shared space abstraction of typed pipe objects. These objects, called pi-channels, are asynchronous pipes that combine streaming and persistence for efficient communication while supporting spatial and temporal decoupling. This feature allows pi-channels to be written even in the absence (or failure) of the reader. In this paper, we present the design of the pi-Spaces runtime system and provide some throughput evaluation results with our experimental prototype. | 6th International Conference on Grid and Cooperative Computing (GCC2007), Urumchi, Xinjiang, China, August 16-18, 2007 | 2007 |  |
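The decoupling that pi-channels provide can be illustrated with a file-backed toy: writes append and persist, so a reader can attach late, or after a writer failure, and still see every record. This only sketches the semantics under that assumption; the class and method names are invented and it is not the pi-Spaces runtime.

```python
# Sketch: a persistent asynchronous pipe backed by an append-only file.
import os, tempfile

class PiChannel:
    """Writes append and persist; a reader can attach at any time."""
    def __init__(self, path):
        self.path = path

    def write(self, record):
        with open(self.path, "a") as f:     # append-only: the writer never blocks on a reader
            f.write(record + "\n")

    def read(self, offset=0):
        """Return (records after `offset`, new offset); empty if nothing written yet."""
        if not os.path.exists(self.path):
            return [], offset
        with open(self.path) as f:
            records = f.read().splitlines()
        return records[offset:], len(records)

path = os.path.join(tempfile.gettempdir(), "pi_channel_demo.log")
if os.path.exists(path):
    os.remove(path)
ch = PiChannel(path)
ch.write("stage-1 results")
ch.write("stage-2 results")
print(ch.read())        # a late-joining reader still sees every record
```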
117 | Chan, P. and Abramson, D. | A Scalable and Efficient Prefix-Based Lookup Mechanism for Large-Scale Grids With the proliferation of Grid computing, a large number of computational resources are available for solving complex scientific and engineering problems. Nevertheless, it is non-trivial to write, deploy, and test Grid applications over heterogeneous and distributed resources. Further complicating matters, programmers may need to manually manage variations in source code due to resource heterogeneity. This paper presents an implementation of an integrated Grid development environment that leverages IBM’s Eclipse IDE and our application development framework, Worqbench. It provides novel tools to develop and debug Grid software. It regards resources as first-class objects in the IDE and allows tight integration between the test beds and the code development process. We discuss how the environment assists programmers in developing Grid applications. | The 3rd IEEE International Conference on e-Science and Grid Computing, Bangalore, India, 10-13 December 2007 | 2007 |  |
118 | Ramdas, T., Abramson, D., Egan, G. & Baldridge, K. | Converting massive TLP to DLP: a special-purpose processor for molecular orbital computations We propose an application specific processor for computational quantum chemistry. The kernel of interest is the computation of electron repulsion integrals (ERIs), which vary in control flow with different input data. This lack of uniformity limits the level of data-level parallelism (DLP) inherent in the application, thus apparently rendering a SIMD architecture unfeasible. All ERIs may be computed in parallel, therefore there is much thread-level parallelism (TLP). We observe that it is possible to match threads with certain characteristics in a manner that reveals significant DLP across multiple threads. Our thread matching and scheduling scheme effectively converts TLP to DLP, allowing SIMD processing which was previously unfeasible. We envision that this approach may expose DLP in other applications traditionally considered to be poor candidates for SIMD computation. | ACM International Conference on Computing Frontiers 2007, pp. 267-276, Ischia Italy, May 7 – 9 2007 | 2007 |  |
119 | Ayyub, S., Abramson, D., Enticott, C., Garic, S., Tan, J. | Executing Large Parameter Sweep Applications on a Multi-VO Testbed Applications that span multiple virtual organizations (VOs) are of great interest to the eScience community. However, recent attempts to execute large-scale parameter sweep applications (PSAs) with the Nimrod/G tool have exposed problems in the areas of fault tolerance, data storage and trust management. In response, we have implemented a task-splitting approach, which breaks up large PSAs into a sequence of dependent subtasks, improving fault tolerance; provides a garbage collection technique, which deletes unnecessary data; and employs a trust delegation technique that facilitates flexible third party data transfers across different VOs. | 7th IEEE International Symposium on Cluster Computing and the Grid, CCGrid 2007, Los Alamitos CA USA, pp 73 – 80 | 2007 |  |
120 | Zheng C, Katz M, Papadopoulos P, Abramson D, Ayyub S, Enticott C, Garic S, Goscinski W, Arzberger P, Lee B S, Phatanapherom S, Sriprayoonsakul S, Uthayopas P, Tanaka Y, Tanimura Y, Tatebe O. | Lessons Learned Through Driving Science Applications in the PRAGMA Grid. This paper describes the coordination, design and implementation of the PRAGMA Grid. Applications in genomics, quantum mechanics, climate simulation, organic chemistry and molecular simulation have driven the middleware requirements, and the PRAGMA Grid provides a mechanism for science and technology teams to collaborate, for grids to interoperate and for international users to share software beyond the essential, de facto standard Globus core. Several middleware tools developed by researchers within PRAGMA have been deployed in the PRAGMA grid and this has enabled significant insights, improvements and new collaborations to flourish. In this paper, we describe how human factors, resource availability and performance issues have affected the middleware, applications and the grid design. We also describe how middleware components in grid monitoring, grid accounting and grid file systems have dealt with some of the major characteristics of our grid. We also briefly describe a number of mechanisms that we have employed to make software easily available to PRAGMA and global grid communities. | Int. J. Web and Grid Services, Vol.3, No.3, pp 287- 312. 2007 | 2007 |  |
121 | Kurniawan, D. and Abramson, D. | A WSRF-Compliant Debugger for Grid Applications Grid computing allows the utilization of vast computational resources for solving complex scientific and engineering problems. However, development tools for Grid applications are not as mature as their traditional counterparts, especially in the area of debugging and testing. Debugging Grid applications typically requires a programmer to address non-trivial issues such as heterogeneity, job scheduling, hierarchical resources, and security. This paper presents the design and implementation of a Grid service debug architecture that is compliant with the Web Service Resource Framework standard. The debugger provides a library with a set of well defined debug APIs. | 21st IEEE International Parallel & Distributed Processing Symposium (IPDPS 2007) March 26-30, 2007, Long Beach, California USA | 2007 |  |
122 | Ramdas, T., Abramson, D., Egan, G. & Baldridge, K. | Towards a special-purpose massively parallel computer for ab initio quantum chemistry: What's on the table, and how do we take it? We propose the development of a special-purpose computer for the Hartree–Fock method, which generally suffers quartic time scaling. We conduct a qualitative assessment of the various computational components, with a focus on electron repulsion integrals (ERI), and consequently map various architectural traits to the various computational components. A quantitative analysis of one component is also presented. We go on to mull over the idea of mixed precision arithmetic. These analyses will aid the practical development of a specialized high performance multi-architecture computer. | Theoretical Chemistry Accounts: Theory, Computation, and Modeling (Theoretica Chimica Acta), doi:10.1007/s00214-007-0306-6, Springer Berlin / Heidelberg, 1432-881X (Print) 1432-2234 (Online), April 28, 2007 | 2007 |  |
123 | Abramson, D. | Applications Development for the Computational Grid The Computational Grid has promised a great deal in support of innovative applications, particularly in science and engineering. However, developing applications for this highly distributed, and often faulty, infrastructure can be demanding. Often it can take as long to set up a computational experiment as it does to execute it. Clearly we need to be more efficient if the Grid is to deliver useful results to applications scientists and engineers. In this paper I will present a raft of upper middleware services and tools aimed at solving the software engineering challenges in building real applications. | The Eighth Asia Pacific Web Conference, Harbin, China, Invited Key Note Address, 16th – 18th Jan, 2006. Lecture Notes in Computer Science, Volume 3841 / 2006, pp. 1 – 12, ISSN: 0302-9743 | 2006 |  |
124 | Russel, A.B.M. and Khan, A.I. | Towards Dynamic Data Grid Framework for eResearch The scale at which scientific data is produced will undergo a massive change in the near future. Sophisticated scientific discovery laboratories and sensor network installations will produce large amounts of data. Research in protein crystallography, for instance, can produce hundreds of terabytes of data from a single crystallography beamline. These data have to be saved for future use and made available for collaborative use by researchers. There is a need to develop a framework which can deal with storing such data volumes. This framework should also handle disparate data sources tightly integrated with the users’ applications and large data streams arising from instruments and sensors. This paper presents an initial study into a framework for servicing large dynamic data sets over a national grid for eResearch. | In Proc. Fourth Australasian Symposium on Grid Computing and e-Research (AusGrid 2006), Hobart, Australia. CRPIT, 54. Buyya, R. and Ma, T., Eds. ACS. 9-16. | 2006 |  |
125 | Ho, T. and Abramson, D. | A Unified Data Grid Replication Framework Modern scientific experiments can generate large amounts of data, which may be replicated and distributed across multiple resources to improve application performance and fault tolerance. Whilst a number of different replica management systems exist, particular communities usually adopt a single system. This creates problems when an application program spans more than one community, because it may need to target more than one middleware layer. One solution to this problem is to build a more flexible data access layer above the specific replica middleware. In this paper, we discuss such an architecture, the Grid Replication Framework, which provides applications with an abstract interface to existing replica systems. Further, the framework’s flexible plug-in architecture makes it easy to support new middleware as it becomes available. | 2nd IEEE International Conference on e-Science and Grid Computing. Dec. 4- 6, 2006, Amsterdam, Netherlands | 2006 |  |
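A plug-in layer of the kind described in the entry above can be sketched as an abstract catalogue interface with per-middleware adapters registered behind it; adding support for a new replica system then touches no application code. The class and method names below are invented for illustration and are not the Grid Replication Framework's API.

```python
# Sketch: an abstract replica-lookup interface with pluggable middleware adapters.
from abc import ABC, abstractmethod

class ReplicaPlugin(ABC):
    @abstractmethod
    def locate(self, logical_name):
        """Return physical URLs holding the named replica."""

class InMemoryCatalogue(ReplicaPlugin):          # stand-in for a real middleware adapter
    def __init__(self, table): self.table = table
    def locate(self, logical_name): return self.table.get(logical_name, [])

class ReplicationFramework:
    def __init__(self): self.plugins = []
    def register(self, plugin): self.plugins.append(plugin)
    def locate(self, logical_name):
        # Query every registered middleware adapter and merge the answers.
        return [url for p in self.plugins for url in p.locate(logical_name)]

grf = ReplicationFramework()
grf.register(InMemoryCatalogue({"climate.nc": ["gsiftp://siteA/climate.nc"]}))
grf.register(InMemoryCatalogue({"climate.nc": ["srb://siteB/climate.nc"]}))
print(grf.locate("climate.nc"))
```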
126 | Kommineni, J., Abramson, D. and Tan, J. | Communication over a Secured Heterogeneous Grid with the GriddLeS runtime environment Scientific workflows are a powerful programming technique for specifying complex computations using a number of otherwise independent components. When used in a Grid environment, it is possible to build powerful “virtual applications” across multiple distributed and heterogeneous resources. Whilst toolkits such as Globus virtualize many system attributes, and thus make it easier to span different organizations, inconsistent security policies and resource heterogeneity can limit the applicability of workflow techniques. In earlier work, we have described a novel run time environment, GriddLeS, that supports flexible communication patterns between workflow components. GriddLeS abstracts IO operations, such that applications are given the illusion of operating on a local file system, whilst in fact they send and receive data between components. In this paper, we describe how GriddLeS assists in resolving some of the issues that arise due to heterogeneity in security policies and system architectures in a Grid environment. We illustrate the solution using a real world scientific workflow for climate modeling, and demonstrate a system that spans multiple conflicting security domains with heterogeneous resources. | 2nd IEEE International Conference on e-Science and Grid Computing. Dec. 4- 6, 2006, Amsterdam, Netherlands. | 2006 |  |
127 | Goscinski, W. and Abramson, D. | Motor: A Virtual Machine for High Performance Computing High performance application development remains challenging, particularly for scientists making the transition to a Grid environment. In general areas of computing, virtual environments such as Java and .Net have proved successful in fostering application development. Unfortunately, these existing virtual environments do not provide the necessary high performance computing abstractions required by e-Scientists. In response, we propose and demonstrate a new approach to the development of a high performance virtual infrastructure: Motor is a virtual machine developed by integrating a high performance message passing library directly within a virtual infrastructure. Motor provides high performance application developers with a common runtime, garbage collection and system libraries, including high performance message passing, whilst retaining strong message passing performance. | IEEE HPDC 06, Paris 2006 | 2006 |  |
128 | Pillai, B., Premaratne, M., Abramson, D., Lee, K., Nirmalathas, A., Lim, C., Shinada, S., Wada, N., and Miyazaki, T. | Analytical Characterization of Optical Pulse Propagation in Polarization-Sensitive Semiconductor Optical Amplifiers In this paper, we perform analytical characterization of optical pulses propagating through a polarization-sensitive semiconductor optical amplifier (SOA). We derive analytical expressions for the carrier density, gain and phase evolution along the SOA and show how these expressions prove useful in optical signal processing applications. The propagation of counter-propagating pulses as well as pulse streams across SOAs has been analysed and expressions for energy gain have been derived in all these cases. We also show that our analytical results reduce to corresponding results of polarization insensitive SOAs already published. The analytical results are in excellent agreement with detailed numerical simulations done in MATLAB using the NIMROD portal. The analytical calculations lead to significant savings with regard to simulation time and processing capacity requirements. We further prove that the energy gain difference for counter-propagating pulse streams is directly proportional to the delay difference between them and hence can be used as a measure of that delay difference. This theoretical result agrees well with experimental results. | IEEE Journal of Quantum Electronics, Vol. 42, No. 10, October 2006. | 2006 |  |
129 | Lee, B., Tang, M., Zhang, J., Soon, O. Y., Zheng, C., Arzberger, P., Abramson, D. | Analysis of Jobs in a Multi-Organizational Grid Test-bed The inevitable move from a single large scale server to a distributed Grid environment is beginning to be realized across international Grid test-beds like the Pacific Rim Applications and Grid Middleware Assembly (PRAGMA). Although jobs submitted to a single large server have been widely analyzed, job characteristics in a Grid environment are different, as we found in our analysis of jobs submitted in the PRAGMA Grid test-bed. This paper reports on the analysis of jobs submitted across the PRAGMA Grid test-bed. The job types are categorized and the runtime of jobs is captured, using the Multi-Organization Grid Accounting System (MOGAS), and analyzed. The number of jobs submitted across organizations, indicating the level of resource sharing among participants, is also captured by the system. | 6th IEEE International Symposium on Cluster Computing and the Grid 16-19 May 2006, Singapore | 2006 |  |
130 | Zheng, C., Abramson, D., Arzberger, P., Ayyub, S., Enticott, C., Garic, S., Katz, M., Kwak, J., Lee, B. S., Papadopoulos, P., Phatanapherom, S., Sriprayoonsakul, S., Tanaka, Y., Tanimura, Y., Tatebe, O., Uthayopas, P. | The PRAGMA Testbed: Building a Multi-Application International Grid This practices and experience paper describes the coordination, design, implementation, availability, and performance of the Pacific Rim Applications and Grid Middleware Assembly (PRAGMA) Grid Testbed. Applications in high-energy physics, genome annotation, quantum computational chemistry, wildfire simulation, and protein sequence alignment have driven the middleware requirements, and the testbed provides a mechanism for international users to share software beyond the essential, de facto standard Globus core. In this paper, we describe how human factors, resource availability and performance issues have affected the middleware, applications and the testbed design. We also describe how middleware components in grid monitoring, grid accounting, grid Remote Procedure Calls, grid-aware file systems, and grid-based optimization have dealt with some of the major characteristics of our testbed. We also briefly describe a number of mechanisms that we have employed to make software more easily available to testbed administrators. | Workshop on Grid Testbeds, 6th IEEE International Symposium on Cluster Computing and the Grid 16-19 May 2006, Singapore | 2006 |  |
131 | Peachey, T. C. and Enticott, C. M. | Determination of the Best Constant in an Inequality of Hardy, Littlewood and Polya In 1934 Hardy, Littlewood, and Pólya generalized Hilbert’s inequality to the case in which the parameters are not conjugate. Determination of the best constant in this generalization is still an unsolved problem. An experimental approach is presented that yields numerical values that agree with theory in the cases in which an exact answer is known. The results may be a guide to a further theoretical approach. | Experimental Mathematics, v 15(1), pp 43-50, 2006. | 2006 |  |
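For context, the inequality in question is, roughly, the non-conjugate generalisation of Hilbert's inequality recalled below (from memory of Hardy, Littlewood and Pólya's Inequalities; the paper should be consulted for the exact normalisation). The best constant K(p, q) is known only in the conjugate case.

```latex
% Hedged restatement of the non-conjugate Hilbert-type inequality studied above.
\[
  \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} \frac{a_m b_n}{(m+n)^{\lambda}}
  \;\le\;
  K(p,q)\,
  \Bigl(\sum_{m=1}^{\infty} a_m^{\,p}\Bigr)^{1/p}
  \Bigl(\sum_{n=1}^{\infty} b_n^{\,q}\Bigr)^{1/q},
  \qquad
  \lambda = 2-\tfrac{1}{p}-\tfrac{1}{q},\quad \tfrac{1}{p}+\tfrac{1}{q}\ge 1 .
\]
% In the conjugate case 1/p + 1/q = 1 (so lambda = 1) the best constant is the
% classical K = pi / sin(pi/p); the best constant in the non-conjugate case is
% the open problem the entry above investigates numerically.
```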
132 | Baldridge, K. K., Sudholt, W., Greenberg, J., Amoreira, C., Potier, Y., Altintas, I., Birnbaum, A., Abramson, D. A., Enticott, C. M., Garic, S. | Cluster and grid infrastructure for computational chemistry and biochemistry Many computational chemists requiring significant and relatively flexible resources have turned to parallel clusters to solve increasingly complex problems. Evolving hardware technology and grid resources present new opportunities for chemistry and biology, yet introduce new complexity related to grid, web and computational difficulties. Here, we describe our experience in using the GAMESS quantum chemistry program on clusters and our utilization of evolving portal, grid and workflow technologies to solve problems that cannot be solved on individual machines. | in Parallel Computing for Bioinformatics and Computational Biology, eds Albert Y. Zomaya, John Wiley & Sons, NJ USA and Canada, pp. 531-550. | 2006 |  |
133 | Pillai, B., Premaratne, M., Abramson, D., Nirmalathas, A. and Lim, C. | Pulse Propagation in Polarization-sensitive Semiconductor Optical Amplifiers | Joint International Conferences on Optical Internet and Next Generation Network (COIN-NGNCON 2006), Korea, July 9 – 13, 2006. | 2006 |  |
134 | Abramson, D. A., Amoreira, C., Baldridge, K. K., Berstis, L., Kondrick, C., Peachey, T. C. | A flexible grid framework for automatic protein-ligand docking Many important and fundamental questions in biology and biochemistry can be better understood through investigations performed at the protein-ligand or drug-receptor level. A variety of techniques have been used over the years, and it is an area of active research. In this paper we illustrate an approach that leverages a number of different computational chemistry approaches, and combines these with non-linear optimization algorithms and grid based high performance computing platforms. The result is a very flexible, high performance method of evaluating protein-ligand interaction algorithms. We illustrate the approach by evaluating a hybrid molecular modeling and quantum theoretical based algorithm. | Proceedings of the Second IEEE International Conference on e-Science and Grid Computing, Dec. 4- 6, 2006, IEEE Computer Society, Los Alamitos USA, pp. 1-8 | 2005 |  |
135 | Lewis, A. Abramson, D. and Peachey, T. | RSCS: A Parallel Simplex Algorithm for the Nimrod/O Optimization Toolset This paper describes a method of parallelisation of the popular Nelder-Mead simplex optimization algorithms that can lead to enhanced performance on parallel and distributed computing resources. A reducing set of simplex vertices are used to derive search directions generally closely aligned with the local gradient. When tested on a range of problems drawn from real world applications in science and engineering, this reducing set concurrent simplex (RSCS) variant of the Nelder-Mead algorithm compared favourably with the original algorithm, and also with the inherently parallel multidirectional search algorithm (MDS). All algorithms were implemented and tested in a general-purpose, grid-enabled optimization toolset. | Scientific Programming, 14(1), 1-11 (2006). IOS Press. | 2005 |  |
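The batching pattern behind the entry above can be suggested as follows: several trial vertices are generated from the current simplex and their expensive objective evaluations run as one concurrent batch. The reflection rule shown is a simplification for illustration only; the actual RSCS vertex-reduction rules are in the paper, and the objective here is a stand-in.

```python
# Sketch: evaluate a batch of reflected simplex vertices concurrently.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def objective(x):                      # stand-in for an expensive model run
    return float(np.sum((x - 1.0) ** 2))

def parallel_reflections(simplex, values, k):
    """Reflect the k worst vertices through the centroid of the rest and
    evaluate all reflections as one concurrent batch."""
    order = np.argsort(values)
    keep, worst = order[:-k], order[-k:]
    centroid = simplex[keep].mean(axis=0)
    trials = [centroid + (centroid - simplex[w]) for w in worst]
    with ProcessPoolExecutor() as pool:
        trial_values = list(pool.map(objective, trials))
    return list(zip(trials, trial_values))

if __name__ == "__main__":
    simplex = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [3.0, 3.0]])
    values = np.array([objective(v) for v in simplex])
    for point, val in parallel_reflections(simplex, values, k=2):
        print(point, val)
```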
136 | Sudholt, W., Baldridge, K., Abramson, D., Enticott, C. and Garic, S. | Application of grid computing to parameter sweeps and optimizations in molecular modeling The ability to theoretically model chemical and biological processes is a key to understand nature and to predict experiments. Unfortunately, this type of computational modeling is very data and computation extensive. However, the worldwide computing grid can now provide the necessary resources. It is therefore a primary goal of current research to utilize these facilities. Here, we present a coupling of the GAMESS quantum chemical code to the Nimrod/G grid distribution tool. As an example, it is applied to the parameter scan of an effective group difference pseudopotential (GDP). This represents the initial step in the parameterization of a capping atom for hybrid quantum mechanics-molecular mechanics (QM/MM) calculations of complex molecular systems. The results give hints to the underlying physical forces of functional group distinctions and provide starting points for later parameter optimizations. The technology demonstrated here significantly extends the manageability of accurate, but costly quantum chemical calculations and is thus valuable for a wide range of applications which involve thousands of independent runs. | Future Generation Computer Systems, 21 (2005), 27-35. Also appeared in International Conference on Computational Sciences, ICCS04, Krakow Poland, June 6 – 9, 2004. | 2005 |  |
137 | Tan, J, Abramson, D. and Enticott, C. | Bridging Organizational Network Boundaries on the Grid The Grid offers significant opportunities for performing wide area distributed computing, allowing multiple organizations to collaborate and build dynamic and flexible virtual organisations. However, existing security firewalls often diminish the level of collaboration that is possible, and current Grid middleware often assumes that there are no restrictions on the type of communication that is allowed. Accordingly, a number of collaborations have failed because the member sites have different and conflicting security policies. In this paper we present an architecture that facilitates inter-organization communication using existing Grid middleware, without compromising the security policies in place at each of the participating sites. Our solutions are built on a number of standard secure communication protocols such as SSH and SOCKS. We call this architecture Remus, and will demonstrate its effectiveness using the Nimrod/G tools. | IEEE Grid 2005, Seattle, Nov 2005 | 2005 |  |
138 | Chin P.W., Lewis D. G. and Giddy J. P. | A Monte Carlo solution for external beam photon radiotherapy verification In this work, Monte Carlo (MC) simulations provide an answer to the surging clinical need for verifying complex radiation treatments. As will be demonstrated, this solution attained accuracy (2% in dose prediction) and versatility (over a wide range of clinical setups) known to be unachievable by other techniques. The solution is not impeded by long runtimes since it has been successfully implemented on the Grid. It can therefore be clinically productive. Implementation on the UK National Grid Service will be reported. This work also draws from MC simulation information beyond physical measurements, such as details about radiation interactions in a radiotherapy imager. Aided by this knowledge, we designed a simplified, substitute imager which reduces MC runtimes. It can also be useful for other dosimetric computation techniques where detailed modelling is unavailable. Additionally, we report a MC study on how a general assumption in non-MC techniques leads to inaccurate dose prediction. | The Monte Carlo Method: Versatility Unbounded in a Dynamic Computing World, Chattanooga, Tennessee, April 2005 pp 17-21. | 2005 |  |
139 | Abramson D., Peachey T. and Lewis A. | Model Optimization and Parameter Estimation with Nimrod/O Optimization problems where the evaluation step is computationally intensive are becoming increasingly common in both engineering design and model parameter estimation. We describe a tool, Nimrod/O, that expedites the solution of such problems by performing evaluations concurrently, utilizing a range of platforms from workstations to widely distributed parallel machines. Nimrod/O offers a range of optimization algorithms adapted to take advantage of parallel batches of evaluations. We describe a selection of case studies where Nimrod/O has been successfully applied, showing the parallelism achieved by this approach. | The International Conference on Computational Science, May 28-31, 2006, University of Reading, UK | 2005 |  |
140 | Abramson, D., Kommineni, J. and Altintas, I. | Flexible IO services in the Kepler Grid Workflow Tool Existing Grid workflow tools assume that individual components either communicate by passing files from one application to another, or are explicitly linked using interprocess communication pipes. When files are used it is usually difficult to overlap reader and writer execution. On the other hand, interprocess communication primitives are invasive and require substantial modification to existing code. We have built a library, called GriddLeS, that combines the advantages of both approaches without any changes to the application code. GriddLeS overloads conventional file IO operations, and supports either local, remote or replicated file access, or direct communication over pipes. In this paper we discuss how GriddLeS can be combined with workflow packages and show how flexible and powerful virtual applications can be constructed rapidly. A large atmospheric science case study is discussed. | IEEE Conference on e-Science and Grid Computing, Melbourne, Dec 2005. | 2005 |  |
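The IO indirection described above can be caricatured in a few lines: the application keeps calling an ordinary open-style function, and a resolver decides at run time whether the name maps to a local file, a remote copy, or a pipe shared with another component. The mapping table, scheme prefixes and helper below are invented for illustration and are not the GriddLeS interface.

```python
# Sketch: resolve a logical file name to local, remote, or pipe access at run time.
import io, urllib.request

PIPES = {}        # stand-in for inter-component pipes, keyed by pipe name

def grid_open(name, mapping, mode="rb"):
    """Dispatch an ordinary-looking open() to the right transport."""
    source = mapping.get(name, name)
    if source.startswith(("http://", "https://")):
        data = urllib.request.urlopen(source).read()     # remote file access
        return io.BytesIO(data)
    if source.startswith("pipe:"):
        return PIPES.setdefault(source, io.BytesIO())    # direct component-to-component
    return open(source, mode)                            # plain local file

# Usage: the same application code works whether 'input.dat' is local, remote,
# or produced live by an upstream component.
mapping = {"input.dat": "pipe:stage1-to-stage2"}
writer = grid_open("input.dat", mapping)
writer.write(b"upstream output\n")
writer.seek(0)
print(grid_open("input.dat", mapping).read())
```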
141 | Chan, P. and Abramson, D. | Netfiles: An Enhanced Stream-based Communication Mechanism Netfiles is an alternative API for message passing on distributed memory machines that is based on pipes. It provides enhanced capabilities such as broadcasts and gather operations. Because Netfiles overloads conventional file I/O operations, parallel programs can be developed and tested on a file system before execution on a parallel machine. Netfiles is part of a parallel programming system called FAbrIC. This paper also presents the design and implementation of the FAbrIC architecture and demonstrates the effectiveness of this approach by means of two parallel applications: a parallel shallow water model and a parallel Jacobi method. | International Symposium on High Performance Computing (ISHPC), Higashikasugano, Nara City, Japan, Sept 7th – 9th, 2005 | 2005 |  |
142 | Abramson, D., Lynch, A., Takemiya, H., Tanimura, Y., Date, S., Nakamura, H., Jeong, K., Zhu, J., Lu, Z., Lee, H., Wang, C., Shih, H., Molina, T., Baldridge, K., Li, W. and Arzberger, P. | Deploying Scientific Applications to the PRAGMA Grid Testbed: Strategies and Lessons Recent advances in grid infrastructure and middleware development have enabled various types of applications in science and engineering to be deployed on the grid. The characteristics of these applications and the diverse infrastructure and middleware solutions developed, utilized or adapted by PRAGMA member institutes are summarized. The applications include those for climate modeling, computational chemistry, bioinformatics and computational genomics, remote control of instruments, and distributed databases. Many of the applications are deployed to the PRAGMA grid testbed in routine experiments. Strategies for deploying applications without modifications, and those taking advantage of new programming models on the grid, are explored and valuable lessons learned are reported. Comprehensive end to end solutions from PRAGMA member institutes that provide important grid middleware components and generalized models of integrating applications and instruments on the grid are also described. | 6th IEEE International Symposium on Cluster Computing and the Grid 16-19 May 2006, Singapore. | 2005 |  |
143 | Buyya, R., Murshed, M., Abramson, D. and Venugopal, S. | Scheduling Parameter Sweep Applications on Global Grids: A Deadline and Budget Constrained Cost-Time Optimisation Algorithm Computational Grids and peer-to-peer (P2P) networks enable the sharing, selection, and aggregation of geographically distributed resources for solving large-scale problems in science, engineering, and commerce. The management and composition of resources and services for scheduling applications, however, becomes a complex undertaking. We have proposed a computational economy framework for regulating the supply of and demand for resources and allocating them for applications based on the users’ quality-of-service requirements. The framework requires economy-driven deadline- and budget-constrained (DBC) scheduling algorithms for allocating resources to application jobs in such a way that the users’ requirements are met. In this paper, we propose a new scheduling algorithm, called the DBC cost–time optimization scheduling algorithm, that aims not only to optimize cost, but also time when possible. The performance of the cost–time optimization scheduling algorithm has been evaluated through extensive simulation and empirical studies for deploying parameter sweep applications on global Grids. | International Journal of Software: Practice and Experience (SPE), Volume 35, Issue 5, pp 491 – 512, Wiley Press, New York, USA, April 25, 2005. | 2005 |  |
144 | Tran, N., Abramson, D., Mingins, C. | Call-Ordering Constraints Several kinds of call-ordering problems have been identified, all of which present subtle difficulties in ensuring the correctness of a sequential program. They include object protocols, synchronisation patterns and re-entrance restrictions. This paper presents call-ordering constraints as a unifying solution to these problems. These constraints are new classes of contracts in addition to traditional preconditions, postconditions and invariants. They extend the traditional notion of behavioural subtyping. The paper shows how constraint inheritance can almost ensure behavioural subtyping conformance. The paper also shows how these constraints may be monitored at run time. Call-ordering constraints are included in the BECON contract system, which has been implemented on the Common Language Infrastructure (CLI). | 12th ASIA-PACIFIC Software Engineering Conference (APSEC 2005), Taipei , Taiwan, December 15-17, 2005 | 2005 |  |
145 | Buyya, R., Date, S., Mizuno-Matsumoto, Y., Venugopal, S. and Abramson, D. | Neuroscience Instrumentation and Distributed Analysis of Brain Activity Data: A Case for eScience on Global Grids | Journal of Concurrency and Computation: Practice and Experience (CCPE), Volume 17, No. 15, 1783-1798pp, Wiley Press, New York, USA, Dec. 2005. | 2005 |  |
146 | Buyya, R., Abramson, D. and Venugopal, S. | The Grid Economy This paper identifies challenges in managing resources in a Grid computing environment and proposes computational economy as a metaphor for effective management of resources and application scheduling. It identifies distributed resource management challenges and requirements of economy-based Grid systems, and discusses various representative economy-based systems, both historical and emerging, for cooperative and competitive trading of resources such as CPU cycles, storage, and network bandwidth. It presents an extensible, service-oriented Grid architecture driven by Grid economy and an approach for its realization by leveraging various existing Grid technologies. It also presents commodity and auction models for resource allocation. The use of commodity economy model for resource management and application scheduling in both computational and data grids is also presented. | Special Issue on Grid Computing, Proceedings of the IEEE, Manish Parashar and Craig Lee (editors), Volume 93, Issue 3, 698-714pp, IEEE Press, New York, USA, March 2005 | 2004 |  |
147 | Goscinski, W. and Abramson, D. | Legacy Application Deployment over Heterogeneous Grids using Distributed Ant The construction of large scale e-Science grid experiments presents a challenge to e-Scientists because of the inherent difficulty of deploying applications over large scale heterogeneous grids. In spite of this, user-oriented application deployment has remained unsupported in grid middleware. This lack of support for application deployment is strongly detrimental to the usability, evolution, uptake and continual development of the grid. This paper presents our motivation, design and implementation of the Distributed Ant user-oriented application deployment system, including recent extensions to support application deployment over heterogeneous grids. We also present a significant Distributed Ant deployment case study, demonstrating how a user-oriented application deployment system enables e-Science experiments. | IEEE Conference on e-Science and Grid Computing, Melbourne, Dec 2005 | 2004 |  |
148 | Abramson, D. and Kommineni, J. | A Flexible IO Scheme for Grid Workflows Computational Grids have been proposed as the next generation computing platform for solving large-scale problems in science, engineering, and commerce. There is an enormous amount of interest in applications, called Grid Workflows, in which a number of otherwise independent programs are run in a “pipeline”. In practice, there are a number of different mechanisms that can be used to couple the models, ranging from loosely coupled file-based IO to tightly coupled message passing. In this paper we propose a flexible IO architecture that provides a wide range of mechanisms for building Grid Workflows without the need for any source code modification and without the need to fix them at design time. Further, the architecture works with legacy applications. We evaluate the performance of our prototype system using a workflow in computational mechanics. | IPDPS-04, Santa Fe, New Mexico, April 2004 | 2004 |  |
149 | Sudholt, W., Baldridge, K., Abramson, D., Enticott, C. and Garic, S. | Parameter Scan of an Effective Group Difference Pseudopotential Using Grid Computing Computational modeling in the health sciences is still very challenging and much of the success has been despite the difficulties involved in integrating all of the technologies, software, and other tools necessary to answer complex questions. Very large-scale problems are open to questions of spatio-temporal scale, and whether physico-chemical complexity is matched by biological complexity. For example, for many reasons, many large-scale biomedical computations today still tend to use rather simplified physics/chemistry compared with the state of knowledge of the actual biology/biochemistry. The ability to invoke modern grid technologies offers the ability to create new paradigms for computing, enabling access of resources which facilitate spanning the biological scale. | New Generation Computing 22 (2004) 125-135 | 2004 |  |
150 | Goscinski, W. and Abramson, D. | Distributed Ant: A System to Support Application Deployment in the Grid e-Science has much to benefit from the emerging field of grid computing. However, construction of e-Science grids is a complex and inefficient undertaking. In particular, deployment of user applications can present a major challenge due to the scale and heterogeneity of the grid. In spite of this, deployment is not supported by current grid computing middleware or configuration management systems, which focus on a super-user approach to application management. Hence, individual users with limited resource control deploy applications manually which is not a grid scalable solution. This paper presents our motivation, design and implementation of a grid scalable, user-oriented, secure application deployment system, Distributed Ant (DistAnt). DistAnt extends the Ant build file environment to provide a flexible procedural deployment description and implements a set of deployment services. | IEEE Grid 2004, November, 2004, Pittsburgh, PA | 2004 |  |
151 | Ho, T. and Abramson, D. | The GriddLeS Data Replication Service The Grid provides infrastructure that allows an arbitrary application to be executed on a range of different computational resources. When input files are very large, or when fault tolerance is important, the data may be replicated. Existing Grid data replication middleware suffers from two shortcomings. First, it typically requires modification to existing applications. Second, there is no automatic resource selection and a user must choose the replica manually to optimize the performance of the system. In this paper we discuss a middleware layer called the GriddLeS Replication Service (GRS) that sits above existing replication services, solving both of these shortcomings. Two case studies are presented that illustrate the effectiveness of the approach. | IEEE Conference on e-Science and Grid Computing, Melbourne, Dec 2005. | 2004 |  |
152 | Abramson, D., Dongarra, J., Meek, E., Roe, P and Shi, Z. | Simplified Grid Computing through Spreadsheets and NetSolve Grid computing has great potential but to enter the mainstream it must be simplified. Tools and libraries must make it easier to solve problems by being simpler and at the same time more sophisticated. In this paper we describe how Grid computing can be achieved through spreadsheets. No parallel programming or complex tools need to be used. So long as dependencies allow it, formulae in a spreadsheet can be evaluated concurrently on the Grid. Thus Grid computing becomes accessible to all those who can use a spreadsheet. The story is completed with a sophisticated backend system, NetSolve, which can solve complex linear algebra systems with minimal intervention from the user. In this paper we present the architecture of the system for performing such simple yet sophisticated grid computing and a case study which performs a large singular value decomposition. | Omiya Sonic City, Saitama, Tokyo, Japan, July 20-22, 2004, HPC Asia 2004 | 2004 |  |
153 | Lewis, A., Abramson, D. and Peachey, T. | RSCS: A Parallel Simplex Algorithm for the Nimrod/O Optimization Toolset This paper describes a method of parallelisation of the popular Nelder-Mead simplex optimization algorithm that can lead to enhanced performance on parallel and distributed computing resources. A reducing set of simplex vertices is used to derive search directions generally closely aligned with the local gradient. When tested on a range of problems drawn from real world applications in science and engineering, this reducing set concurrent simplex (RSCS) variant of the Nelder-Mead algorithm compared favourably with the original algorithm, and also with the inherently parallel multidirectional search algorithm (MDS). All algorithms were implemented and tested in a general-purpose, grid-enabled optimization toolset. | International Symposium on Parallel and Distributed Computing, In association with HeteroPar’04, July 5th – 8th 2004, University College Cork, Ireland. | 2004 |  |
154 | Jones R., Peng D., Chaperon P., Tan M., Abramson D. and Peachey T. | Structural Optimization with Damage Tolerance Constraints Recent developments in cluster computing and structural optimisation offer the potential for significant improvements in the design of more durable structures, and for producing optimum fatigue life extension rework profiles. The present paper reveals the importance of non-destructive inspection (NDI) and the role it plays in determining optimum structural rework geometries. This finding is important since when performing stress based optimisation the interrelationship of NDI and optimum rework geometries is not immediately apparent. The problems studied also reveal the flatness of the solution space and, unlike stress based optimisation, the existence of multiple maxima. Such solution spaces represent a major challenge for any optimisation procedure. To this end we explore the advantages of using the NIMROD optimisation suite of programs in conjunction with a cluster computer architecture and highlight the benefit of visualizing the solution space. | Theoretical and Applied Fracture Mechanics, 43, 2005, pp. 133-155. | 2004 |  |
155 | Russel, A.B.M. and Bevinakoppa, S. | Predicting the Performance of Data Transfer in a Grid Environment In a Grid environment, only implementing a parallel algorithm for data transfer or multiple parallel jobs allocation doesn’t give reliable data transfer. There is a need to predict the data transfer performance before allocating the parallel processes on grid nodes. A predictive framework will be a solution in this scenario. In this paper we propose a predictive framework for performing efficient data transfer. Our framework considers different phases for providing information about efficient and reliable participating nodes in a computational Grid environment. Our experimental results reveal that multivariable predictors provide better accuracy compared to univariable predictors. We observe that the Neural Network prediction technique provides better prediction accuracy compared to the Multiple Linear Regression and Decision Regression. Our proposed ranking factor overcomes the problem of considering fresh participating nodes in data transfer. | In Chin-Sheng Chen, Joaquim Filipe, Isabel Seruca, José Cordeiro, editors, ICEIS 2005, Proceedings of the Seventh International Conference on Enterprise Information Systems, Miami, USA, May 25-28, 2005. pages 176-181, 2005. | 2004 |  |
156 | Kommineni, J and Abramson, D. | Building Virtual Applications for the GRID with Legacy Components | | 2004 |  |
157 | Beasley, J.E., Krishnamoorthy, M., Sharaiha, Y.M. and Abramson, D. | Displacement problem and dynamically scheduling aircraft landings | J. Opl Res. Soc. 55 (2004) 54-64. | 2004 |  |
158 | Nam, T., Abramson, D. and Mingins, C. | A Taxonomy of Call Ordering Problems | APSEC2004, Nov. 30 -Dec. 3, 2004, Busan, Korea | 2004 |  |
159 | Searle, A, Gough, J. K. and Abramson, D. A. | DUCT: An Interactive Define-Use Chain Navigation Tool for Relative Debugging This paper describes an interactive tool that facilitates following define-use chains in large codes. The motivation for the work is to support “relative debugging”, where it is necessary to iteratively refine a set of assertions between different versions of a program. DUCT is novel because it exploits the Microsoft Intermediate Language (MSIL) that underpins the .NET Framework. Accordingly, it works on a wide range of programming languages without any modification. The paper describes the design and implementation of DUCT, and then illustrates its use with a small case study. | AADebug’03. Ghent, Belgium, September 2003 | 2003 |  |
160 | Smith, K. A., Abramson, D. and Duke, D. | Hopfield Neural Networks for Timetabling: Formulations, Methods, and Comparative Results This paper considers the use of discrete Hopfield neural networks for solving school timetabling problems. Two alternative formulations are provided for the problem: a standard Hopfield-Tank approach, and a more compact formulation which allows the Hopfield network to be competitive with swapping heuristics. It is demonstrated how these formulations can lead to different results. The Hopfield network dynamics are also modified to allow it to be competitive with other metaheuristics by incorporating controlled stochasticities. These modifications do not complicate the algorithm, making it possible to implement our Hopfield network in hardware. The neural network results are evaluated on benchmark data sets and are compared to results obtained using greedy search, simulated annealing and tabu search | Computers & Industrial Engineering, 44 (2003), pp 283 – 305 | 2003 |  |
161 | Lewis, A., Abramson, D. and Peachey, T. | An Evolutionary Programming Algorithm for Automatic Engineering Design This paper describes a new Evolutionary Programming algorithm based on Self-Organised Criticality. When tested on a range of problems drawn from real-world applications in science and engineering, it performed better than a variety of gradient descent, direct search and genetic algorithms. It proved capable of delivering high quality results faster, and is simple, robust and highly parallel. | PPAM 2003, Fifth International Conference on Parallel Processing and Applied Mathematics, Czestochowa, Poland, Lecture Notes in Computer Science, Volume 3019 / 2004, pp. 586 – 594, ISBN: 3-540-21946-3, September 7-10, 2003 | 2003 |  |
162 | Peachey, T., Abramson, D., Lewis, A., Kurniawan, D. and Jones, R. | Optimization using Nimrod/O and its Application to Robust Mechanical Design We describe the Nimrod/O distributed optimization tool and its application to a problem in mechanical design. The problem is to determine the shape for a hole in a thin plate under load that gives optimal life in the presence of flaws. The experiment reveals two distinct design strategies for optimizing this life. Nimrod/O is able to find both of these rapidly due to its inherent parallelism. | PPAM 2003, Fifth International Conference on Parallel Processing and Applied Mathematics, Czestochowa, Poland, Lecture Notes in Computer Science, Volume 3019 / 2004, pp. 730 – 737, ISBN: 3-540-21946-3, September 7-10, 2003 | 2003 |  |
163 | Abramson, D., Finkel, R., Kurniawan, D., Kowalenko, V. and Watson, G. | Parallel Relative Debugging with Dynamic Data Structures This paper discusses the use of “relative debugging” as a technique for locating errors in a program that has been ported or developed using evolutionary software engineering techniques. It works on the premise that it is possible to find errors by comparing the contents of key data structures at run time between a “working” version and the new code. Previously, our reference implementation of relative debugging, called Guard, only supported comparison of regular data structures like scalars, simple structures and arrays. Recently, we augmented Guard enabling it to compare dynamically allocated structures like linked lists. Such comparisons are complex because the raw values of pointers cannot be compared directly. Here we describe the changes that were required to support dynamic data types. The functionality is illustrated in a small case study, in which a parallel particle code behaves differently as the number of processors is altered. | 16th International Conference on Parallel and Distributed Computing Systems, pp 22 – 29, August 13 – 15, 2003 Reno, Nevada, USA | 2003 |  |
164 | Tran, N., Mingins, C., and Abramson, D. | Managed Assertions for Component Contracts Assertions are a well established mechanism for the specification and verification of program semantics in the forms of pre-conditions, post-conditions and invariants of object and component interfaces. Traditionally, assertions are typically specific to individual programming languages. The ECMA Common Language Infrastructure (CLI) provides a shared dynamic execution environment for implementation and interoperation of multiple languages. The authors extend the CLI with support for assertions, in the Design by Contract style, in a language-agnostic manner. Their design is flexible and powerful in that it treats assertions as first class constructs in both the binary format and in the run-time while leaving the source level specification choices completely open. The design also enforces behavioural sub-typing and object re-entrance rules, and provides sensible exception handling. The implementation of run-time monitoring in Microsoft’s Shared Source CLI (a.k.a. Rotor) integrates with the dynamic run-time, performing just-in-time code weaving in a novel way to maximise efficiency while operating at the platform-neutral level. | Proceedings of the Seventh World Conference on Integrated Design and Process Technology (IDPT-2003), 3-5 December 2003, Austin, Texas | 2003 |  |
165 | Tran, N., Mingins, C., and Abramson, D. | Design and Implementation of Assertions for the Common Language Infrastructure Behavioral specifications in interface contracts are important measures for improving quality of software components. Binary components of different language origins need a common understanding of behavioral contracts to work effectively in component-based systems. We propose a system by which behavioral specifications in the spirit of Design by Contract™ can accompany binary components and be available at runtime to enable flexible and correct treatment, in a language neutral manner. Behavioral contracts written in different specification notations, for components written in different programming languages, have common runtime semantics. Runtime monitoring is correct with regards to behavioral subtyping, object re-entrance and exception handling. The mechanism also enables runtime reflection of contract constructs. We show our design for this system, as an extension to the Common Language Infrastructure (CLI) standardized by ECMA. We also describe our prototype implementation based on Microsoft’s Shared Source CLI. | IEE Proceedings – Software, 150(5), October 2003, pp. 329-336 | 2003 |  |
166 | Lewis, A. and Abramson, D. | An Evolutionary Programming Algorithm for Multi-Objective Optimisation This paper describes a new Evolutionary Programming optimisation algorithm and a method of its application to multi-objective optimisation problems. Computational results are presented demonstrating the algorithm’s ability to find Pareto-optimal solutions for a real-world problem in radio-frequency component design. | Proc. 2003 Congress on Evolutionary Computation (CEC 2003), vol. 3, pp. 1926-1932, Canberra, Dec. 2003 | 2003 |  |
167 | Abramson, D., Barak, A and Enticott, C. | Job Management in Grids of MOSIX Clusters EnFuzion and MOSIX are two packages that represent different approaches to cluster management. EnFuzion is a user-level queuing system that can dispatch a predetermined number of processes to a cluster. It is a commercial version of Nimrod, a tool that supports parameter sweep applications on a variety of platforms. MOSIX, on the other hand, is operating system (kernel) level software that supports preemptive process migration for near optimal, cluster-wide resource management, virtually making the cluster run like an SMP. Traditionally, users either use EnFuzion with a conventional cluster operating system, or MOSIX without a queue manager. This paper presents a Grid management system that combines EnFuzion with MOSIX for efficient management of processes in multiple clusters. We present a range of experiments that demonstrate the advantages of such a combination, including a real world case study that distributed a computational model of a solar system. | 16th International Conference on Parallel and Distributed Computing Systems, pp 36 – 42, August 13 – 15, 2003 Reno, Nevada, USA | 2003 |  |
168 | Buyya, R, Date, S., Mizuno-Matsumoto, Y., Venugopal, S and Abramson, D. | Composition of Distributed Brain Activity Analysis and its On-Demand Deployment on Global Grids The distribution of knowledge (by scientists) and data sources (advanced scientific instruments), and the need of large-scale computational resources for analyzing massive scientific data are two major problems commonly observed in scientific disciplines. The two popular scientific disciplines of this nature are brain science and high-energy physics. The analysis of brain activity data gathered from the MEG (Magnetoencephalography) instrument is an important research topic in medical science since it helps doctors in identifying symptoms of diseases. The data needs to be analyzed exhaustively to efficiently diagnose and analyze brain functions and requires access to large-scale computational resources. The potential platform for solving such resource intensive applications is the Grid. This paper presents the design and development of MEG data analysis system by leveraging Grid technologies, primarily Nimrod-G, Gridbus, and Globus. It describes the composition of the neuroscience (brain activity analysis) application as parameter-sweep application and its on-demand deployment on Global Grids for distributed execution. | New Frontiers in High-Performance Computing: Proceedings of the 10th International Conference on High Performance Computing (HiPC 2003) Workshops (Dec. 17, 2003, Hyderabad, India), ISBN: 81-88901-05-9, Elite Publishing House, New Delhi, India | 2003 |  |
169 | Buyya, R., Branson, K., Giddy, J. Abramson, D. | The Virtual Laboratory: Enabling On Demand Drug Design with the Worldwide Grid Computational Grids are emerging as a new paradigm for sharing and aggregation of geographically distributed resources for solving large-scale compute and data intensive problems in science, engineering, and commerce. However, application development, resource management and scheduling in these environments is a complex undertaking. In this paper, we illustrate the development of a virtual laboratory environment by leveraging existing Grid technologies to enable molecular modelling for drug design on geographically distributed resources. It involves screening millions of compounds in the chemical database (CDB) against a protein target to identify those with potential use for drug design. We have used the Nimrod-G parameter specification language to transform the existing molecular docking application into a parameter sweep application for executing on distributed systems. We have developed new tools for enabling access to ligand records/molecules in the CDB from remote resources. The Nimrod-G resource broker along with molecule CDB data broker is used for scheduling and on-demand processing of docking jobs on the World-Wide Grid (WWG) resources. The results demonstrate the ease of use and power of the Nimrod-G and virtual laboratory tools for grid computing. | Concurrency and Computation: Practice and Experience, 15(1), 2003 | 2003 |  |
170 | Abramson, D and Watson, G. | Debugging Scientific Applications in the .NET Framework The Microsoft .NET Framework represents a major advance over previous runtime environments available for Windows platforms and offers a number of architectural features that would be of value in scientific programs. However there are such major differences between .NET and legacy environments under both Windows and Unix, that the effort of migrating software is substantial. Accordingly, software migration is unlikely to occur unless tools are developed for supporting this process. In this paper we discuss a ‘relative debugger’ called Guard which provides powerful support for debugging programs as they are ported from one environment or platform to another. We describe a prototype implementation developed for Microsoft’s Visual Studio.NET - a rich interactive environment that supports code development for the .NET Framework. The paper discusses the overall architecture of Guard under VS.NET and highlights some of the technical challenges that were encountered during its development. A simple case study is provided that demonstrates the effectiveness of relative debugging in locating subtle errors that occur when even a minor upgrade is attempted from one version of a language to another. For this example, we illustrate the use of relative debugging using a Visual Basic program that was ported from Visual Basic 6.0 to Visual Basic .NET. | Future Generation Computer Systems, Vol. 19 issue 5, June 2003, pp 665 – 678 | 2003 |  |
171 | Abramson, D, Buyya, R. and Giddy, J. | A Computational Economy for Grid Computing and its Implementation in the Nimrod-G Resource Broker Computational Grids that couple geographically distributed resources such as PCs, workstations, clusters, and scientific instruments, have emerged as a next generation computing platform for solving large-scale problems in science, engineering, and commerce. However, application development, resource management, and scheduling in these environments continue to be a complex undertaking. In this article, we discuss our efforts in developing a resource management system for scheduling computations on resources distributed across the world with varying quality of service. Our service-oriented grid computing system called Nimrod-G manages all operations associated with remote execution including resource discovery, trading, scheduling based on economic principles and a user defined quality of service requirement. The Nimrod-G resource broker is implemented by leveraging existing technologies such as Globus, and provides new services that are essential for constructing industrial-strength Grids. We discuss results of preliminary experiments on scheduling some parametric computations using the Nimrod-G resource broker on a world-wide grid testbed that spans five continents. | Future Generation Computer Systems. Volume 18, Issue 8, Oct-2002. | 2002 |  |
172 | Abramson, D., Watson, G. and Dung, L. | Guard: A Tool for Migrating Scientific Applications to the .NET Framework For many years, Unix has been the platform of choice for the development and execution of large scientific programs. The new Microsoft .NET Framework represents a major advance over previous runtime environments available in Windows platforms, and offers a number of architectural features that would be of value in scientific programs. However, there are such major differences between Unix and .NET under Windows, that the effort of migrating software is substantial. Accordingly, unless tools are developed for supporting this process, software migration is unlikely to occur. In this paper we discuss a 'relative debugger' called Guard, which provides powerful support for debugging programs as they are ported from one platform to another. We describe a prototype implementation developed for Microsoft's Visual Studio.NET, a rich interactive environment that supports code development for the .NET Framework. The paper discusses the overall architecture of Guard under VS.NET, and highlights some of the technical challenges that were encountered. | 2002 International Conference on Computational Science (ICCS 2002), Amsterdam, The Netherlands, April 21st 2002, pp 834 – 843 | 2002 |  |
173 | Buyya, R, Abramson, D. Giddy, J and Stockinger, H. | Economic Models for Resource Management and Scheduling in Grid Computing The accelerated development in Peer-to-Peer (P2P) and Grid computing has positioned them as promising next generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments is a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations or peers. The resource owners of each of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. This framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real world market, there exist various economic models for setting the price of services based on supply-and-demand and their value to the user. They include commodity market, posted price, tender and auction models. In this paper, we discuss the use of these models for interaction between Grid components to decide resource service value, and the necessary infrastructure to realize each model. In addition to usual services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. We briefly discuss existing technologies that provide some of these services and show their usage in developing the Nimrod-G grid resource broker. Furthermore, we demonstrate the effectiveness of some of the economic models in resource trading and scheduling using the Nimrod/G resource broker with deadline and cost constrained scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed that has resources distributed across five continents | Journal of Concurrency: Practice and Experience, Grid computing special issue 14/13-15, 2002, pp 1507 – 1542 | 2002 |  |
174 | Buyya, R, Murshed, M., and Abramson, D. | A Deadline and Budget Constrained Cost-Time Optimization Algorithm for Scheduling Task Farming Applications on Global Grids Computational Grids and peer-to-peer (P2P) networks enable the sharing, selection, and aggregation of geographically distributed resources for solving large-scale problems in science, engineering, and commerce. The management and composition of resources and services for scheduling applications, however, becomes a complex undertaking. We have proposed a computational economy framework for regulating the supply and demand for resources and allocating them for applications based on the users' quality of services requirements. The framework requires economy driven deadline and budget constrained (DBC) scheduling algorithms for allocating resources to application jobs in such a way that the users' requirements are met. In this paper, we propose a new scheduling algorithm, called DBC cost-time optimisation, which extends the DBC cost-optimisation algorithm to optimise for time, keeping the cost of computation at the minimum. The superiority of this new scheduling algorithm, in achieving lower job completion time, is demonstrated by simulating the World-Wide Grid and scheduling task-farming applications for different deadline and budget scenarios using both this new and the cost optimisation scheduling algorithms. | The 2002 International Conference on Parallel and Distributed Processing Techniques and Applications, Las Vegas, Nevada, USA, June 2002 | 2002 |  |
175 | Abramson, D., Roe, P., Kotler, L. and Mather, D. | ActiveSheets: Super-Computing with Spreadsheets Spreadsheets are important business tools. Increasingly they are being used for simulation e.g. to perform risk analysis. Such tasks have far greater computational demands than traditional spreadsheet bookkeeping applications. In this paper we show how spreadsheets can support supercomputing. This is achieved without requiring the spreadsheet user to have specialist tools or knowledge. The key technical innovation is a mechanism enabling the concurrent evaluation of spreadsheet functions. Furthermore the mechanism does not require modification of the standard spreadsheet evaluation engine. | 2001 High Performance Computing Symposium (HPC’01), Advanced Simulation Technologies Conference, April 22-26, 2001, pp 110 – 115, Seattle, Washington (USA). | 2001 |  |
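The concurrency mechanism summarised above, evaluating spreadsheet formulae in parallel once their precedents are available, can be sketched as below. This is not the ActiveSheets implementation; the cell table, the evaluate function and the thread-pool scheduling loop are assumptions made purely for illustration.

```python
# Hedged sketch of dependency-driven concurrent formula evaluation.
# Not ActiveSheets itself: the cell table and scheduling loop are illustrative.
from concurrent.futures import ThreadPoolExecutor

# Each cell: (list of precedent cells, function of their values).
sheet = {
    "A1": ([], lambda: 2.0),
    "A2": ([], lambda: 3.0),
    "B1": (["A1"], lambda a1: a1 ** 2),            # independent of B2
    "B2": (["A2"], lambda a2: a2 + 10),            # may run concurrently with B1
    "C1": (["B1", "B2"], lambda b1, b2: b1 * b2),  # waits for both
}

def evaluate(cells):
    values, pending = {}, dict(cells)
    with ThreadPoolExecutor() as pool:
        while pending:
            # All formulae whose precedents are already computed are ready to run.
            ready = [c for c, (deps, _) in pending.items()
                     if all(d in values for d in deps)]
            futures = {c: pool.submit(pending[c][1], *[values[d] for d in pending[c][0]])
                       for c in ready}
            for c, f in futures.items():
                values[c] = f.result()
                del pending[c]
    return values

print(evaluate(sheet))  # {'A1': 2.0, 'A2': 3.0, 'B1': 4.0, 'B2': 13.0, 'C1': 52.0}
```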
176 | Abramson, D, Lewis, A. and Peachey, T. | Case Studies in Automatic Design Optimisation using the P-BFGS Algorithm In this paper we consider a number of real world case studies using an automatic design optimisation system called Nimrod/O. The case studies include a photo-chemical pollution model, two different simulations of the strength of a mechanical part and the radio frequency properties of a ceramic bead. In each case the system is asked to minimise an objective function that results from the execution of a time consuming computational model. We compare the performance of an exhaustive search technique with a new non-linear gradient descent algorithm called P-BFGS. The exhaustive search results are produced using enFuzion, a commercial version of the parametric execution software Nimrod. P-BFGS is a parallel variant of the well-known BFGS algorithm and has been tested on a 64 processor Pentium cluster. The results show that P-BFGS can achieve a speedup when compared to the exhaustive search on 3 out of the 4 problems. In addition, it always uses fewer processors than an exhaustive search. | 2001 High Performance Computing Symposium (HPC’01), Advanced Simulation Technologies Conference, April 22-26, 2001, pp 104 – 109, Seattle, Washington (USA). | 2001 |  |
177 | Buyya, R, Abramson, D., and Giddy, J. | A Case for Economy Grid Architecture for Service Oriented Grid Computing Computational Grids are a promising platform for executing large-scale resource intensive applications. However, resource management and scheduling in the Grid environment is a complex undertaking as resources are (geographically) distributed, heterogeneous in nature, owned by different individuals or organizations with their own policies, have different access and cost models, and have dynamically varying loads and availability. This introduces a number of challenging issues such as site autonomy, heterogeneous interaction, policy extensibility, resource allocation or co-allocation, online control, scalability, transparency, resource brokering, and computational economy. A number of Grid systems (such as Globus and Legion) have addressed many of these issues with exception of a computational economy. We argue that a computational economy is required in order to create a real world scalable Grid because it provides a mechanism for regulating the Grid resources demand and supply. It offers incentive for resource owners to be part of the Grid and encourages consumers to optimally utilize resources and balance timeframe and access costs. We propose a computational economy framework that builds on the existing Grid middleware systems and offers an infrastructure for resource management and trading in the Grid environment. We discuss the usage economic models for resource trading in the Nimrod/G resource broker and present deadline and cost-based scheduling experimental results on the Grid. | 10th Heterogeneous Computing Workshop April 23, 2001 in conjunction with IPDPS in San Francisco, California | 2001 |  |
178 | Buyya, R., Stockinger, H., Giddy, J. and Abramson, D. | Economic Models for Management of Resources in Peer-to-Peer and Grid Computing The accelerated development in Grid and peer-to-peer computing has positioned them as promising next generation computing platforms. They enable the creation of Virtual Enterprises (VE) for sharing resources distributed across the world. However, resource management, application development and usage models in these environments is a complex undertaking. This is due to the geographic distribution of resources that are owned by different organizations. The resource owners of each of these resources have different usage or access policies and cost models, and varying loads and availability. In order to address complex resource management issues, we have proposed a computational economy framework for resource allocation and for regulating supply and demand in Grid computing environments. The framework provides mechanisms for optimizing resource provider and consumer objective functions through trading and brokering services. In a real world market, there exist various economic models for setting the price for goods based on supply-and-demand and their value to the user. They include commodity market, posted price, tenders and auctions. In this paper, we discuss the use of these models for interaction between Grid components in deciding resource value and the necessary infrastructure to realize them. In addition to normal services offered by Grid computing systems, we need an infrastructure to support interaction protocols, allocation mechanisms, currency, secure banking, and enforcement services. Furthermore, we demonstrate the usage of some of these economic models in resource brokering through Nimrod/G deadline and cost-based scheduling for two different optimization strategies on the World Wide Grid (WWG) testbed that contains resources located on five continents: Asia, Australia, Europe, North America, and South America. | Technical Track on Commercial Applications for High-Performance Computing, SPIE International Symposium on The Convergence of Information Technologies and Communications (ITCom 2001), August 20-24, 2001, Denver, Colorado, USA | 2001 |  |
179 | Randall, M. and Abramson, D. | A General Meta-Heuristic Based Solver for Combinatorial Optimisation Problems In recent years, there have been many studies in which tailored heuristics and meta-heuristics have been applied to specific optimisation problems. These codes can be extremely efficient, but may also lack generality. In contrast, this research focuses on building a general-purpose combinatorial optimisation problem solver using a variety of meta-heuristic algorithms including Simulated Annealing and Tabu Search. The system is novel because it uses a modelling environment in which the solution is stored in dense dynamic list structures, unlike a more conventional sparse vector notation. Because of this, it incorporates a number of neighbourhood search operators that are normally only found in tailored codes and it performs well on a range of problems. The general nature of the system allows a model developer to rapidly prototype different problems. The new solver is applied across a range of traditional combinatorial optimisation problems. The results indicate that the system achieves good performance in terms of solution quality and runtime. | Computational Optimization and Applications (COAP), 20, 2001, pp 185 – 210. | 2001 |  |
180 | Chan, P. and Abramson, D. | NetFiles: A Novel Approach to Parallel Programming of Master/Worker Applications We propose a new approach to parallel programming of master-worker applications using an abstraction for interprocess communication called NetFiles. The programmer writes the parallel application as a collection of sequential modules that communicate with each other by reading and/or writing files. When run on a sequential machine, the data is written to and read from conventional files. But, when run on a parallel platform, these file operations are actually implemented using message passing. Our approach makes it possible to develop a parallel program in two phases. In the first phase, the user is concerned with how to decompose the data and code, but without concern for the details of the parallel platform. In the second phase, the program can be configured to run on a parallel environment with little or no modification to the sequential code. We demonstrate the effectiveness of our approach on two parallel master/worker programs: parallel matrix multiplication and parallel genetic algorithms. | HPC Asia 2001, 24-28 September 2001 • Royal Pines Resort Gold Coast, Queensland, Australia | 2001 |  |
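A minimal sketch of the NetFiles idea as summarised above: modules keep the same file-like write/read interface, and only the backend changes between a sequential run (a real file) and a parallel run (message passing, represented here by an in-memory queue). FileChannel and MessageChannel are hypothetical names used for illustration, not the NetFiles API.

```python
# Hedged sketch of file-like channels backed either by a real file or by
# message passing. Class names and the toy producer/consumer are assumptions.
import queue

class FileChannel:
    """Backend that really writes to and reads from a file on disk (sequential run)."""
    def __init__(self, path):
        self.path = path
    def write(self, data):
        with open(self.path, "w") as f:
            f.write(data)
    def read(self):
        with open(self.path) as f:
            return f.read()

class MessageChannel:
    """Backend that passes data through a queue, standing in for message passing."""
    def __init__(self):
        self.q = queue.Queue()
    def write(self, data):
        self.q.put(data)
    def read(self):
        return self.q.get()

def producer(channel):      # module code is unchanged whichever backend is used
    channel.write("42\n")

def consumer(channel):
    return float(channel.read())

chan = MessageChannel()     # swap in FileChannel("x.dat") for a sequential run
producer(chan)
print(consumer(chan))       # 42.0
```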
181 | Watson, G., Abramson, D. | Parallel Relative Debugging for Distributed Memory Applications: A Case Study Relative debugging is a technique that addresses the problem of debugging programs that have been developed using evolutionary software techniques. Recent developments allow relative debugging to be used on programs that have been ported from serial to parallel architectures or between different parallel architectures. Such programs may change significantly in the porting process and this raises many issues for the debugging methodology. This paper examines the use of relative debugging on a distributed memory application in which errors were introduced when the code was ported from a sequential to a parallel architecture. Our debugger, GUARD-2000, is used to compare key data structures between the two codes even though the parallel data structure has undergone significant reorganisation when mapped onto a distributed memory platform. We show how this technique can quickly and accurately pinpoint the source of errors in the parallel code. | International Conference on Parallel and Distributed Processing Techniques and Applications June 25-28, 2001, Las Vegas, Nevada, USA | 2001 |  |
182 | Abramson D, Lewis A, Peachey T, Fletcher, C. | An Automatic Design Optimization Tool and its Application to Computational Fluid Dynamics In this paper we describe the Nimrod/O design optimization tool, and its application in computational fluid dynamics. Nimrod/O facilitates the use of an arbitrary computational model to drive an automatic optimization process. This means that the user can parameterise an arbitrary problem, and then ask the tool to compute the parameter values that minimize or maximise a design objective function. The paper describes the Nimrod/O system, and then discusses a case study in the evaluation of an aerofoil problem. The problem involves computing the shape and angle of attack of the aerofoil that maximises the lift to drag ratio. The results show that our general approach is extremely flexible and delivers better results than a program that was developed specifically for the problem. Moreover, it only took us a few hours to set up the tool for the new problem and required no software development. | SuperComputing 2001, Denver, Nov 2001. | 2001 |  |
183 | Abramson, D. | Parallel Execution Mechanism for Spreadsheets | Australian Provisional Patent PQ8365/01. US Patent lodged (Number 20010056440). | 2001 |  |
184 | Abramson, D. | A Method of Eliminating Lens Flare in Front Projection Video Conference Facilities | Australian Provisional Patent PR0553/01. | 2001 |  |
185 | Gedge, R. and Abramson, D. | The Virtual Tea Room – experiences with a new type of social space This paper focuses on technical, social and psychological aspects of the use of a video wall for communications between two geographically separated halves of a university department. The overall usability of the system is related to technological limitations such as video and audio quality, and aspects connected with organizational and interpersonal psychology. A video wall creates a new type of social space, raising new social psychology issues, and requiring new types of interpersonal behaviour. Whilst it has some close parallels with standard desktop video conferencing, it also has some unique and important differences. This paper briefly summarizes those problems and discusses the issues that are shared with other electronic collaboration tools such as desktop video conferencing, followed by a discussion of those issues unique to the video wall. Ongoing trials are being conducted with alterations to audio and video quality, and an analysis is being performed of usage of and responses to the video wall. | to appear, 7th International Workshop on Groupware, 6-8 September 2001, Darmstadt, Germany | 2001 |  |
186 | Buyya, R., Giddy, J. and Abramson, D. | An Evaluation of Economy-based Resource Trading and Scheduling on Computational Power Grids for Parameter Sweep Applications Computational Grids are becoming attractive and promising platforms for solving large-scale (problem solving) applications of multi-institutional interest. However, the management of resources and scheduling computations in the Grid environment is a complex undertaking as they are (geographically) distributed, heterogeneous in nature, owned by different individuals or organisations with their own policies, different access and cost models, and have dynamically varying loads and availability. This introduces a number of challenging issues such as site autonomy, heterogeneous substrate, policy extensibility, resource allocation or co-allocation, online control, scalability, transparency, and economy of computations. Some of these issues are being addressed by system-level Grid middleware toolkits such as Globus. Our work in general focuses on economy/market driven resource management architecture for the Grid; and in particular on resource brokering and scheduling through a user-level middleware system called Nimrod/G and economy of computations through a system-level middleware infrastructure called GRACE (GRid Architecture for Computational Economy). Nimrod/G supports modelling of large-scale parameter study simulations (parameter sweep applications) through a simple declarative language or GUI and their seamless execution on global computational Grids. It uses GRACE services for identifying and negotiating low cost access to computational resources. The Nimrod/G adaptive scheduling algorithms help in minimising the time and/or the cost of computations for user defined constraints. These algorithms are evaluated in different scenarios for their effectiveness for scheduling parameter sweep applications in Grid environments such as GRACE and core middleware (Globus, Legion, and/or Condor-G) enabled federated Grids. | Workshop on Active Middleware Services (AMS 2000), (in conjunction with Ninth IEEE International Symposium on High Performance Distributed Computing), Kluwer Academic Press, August 1, 2000, Pittsburgh, USA | 2000 |  |
187 | Abramson, D., Kommineni, J., McGregor, J. and Katzfey, J. | An Atmospheric Sciences Workflow and its Implementation with Web Services Computational and data Grids couple geographically distributed resources such as high performance computers, workstations, clusters, and scientific instruments. Grid Workflows consist of a number of components, including: computational models, distributed files, scientific instruments and special hardware platforms. In this paper, we describe an interesting grid workflow in atmospheric sciences and show how it can be implemented using Web Services. An interesting attribute of our implementation technique is that the application codes can be adapted to work on the Grid without source modification | Future Generation Computer Systems, 21, 2005, pp 69 – 78. Also appeared in The International Conference on Computational Sciences, ICCS04, Krakow Poland, June 6 – 9, 2004. | 2000 |  |
188 | Lewis, A., Saario, S., Abramson, D. and Lu, J. | An Application of Optimisation for Passive RF Component Design The solution of computational electromagnetic simulations is integral to the design process. As higher performance computers become more available, the application of optimisation techniques to reduce design times becomes more feasible. This paper presents the application of Parallel BFGS and Adaptive Simulated Annealing in minimising the transmission through a ceramic bead suppressor on a straight wire transmission line. | Conference on Electromagnetic Field Computation, Milwaukee, June 4-7th 2000 | 2000 |  |
189 | De Silva, A. and Abramson, D. A. | Parallel algorithms for solving Stochastic Linear Programs Parallel and distributed computing platforms are beginning to revolutionise the traditional engineering design process. Rather than performing direct experiments, researchers are now able to simulate extremely complex systems based on numerical models. When appropriately formulated, these models allow the designer to predict the behaviour of real systems with great accuracy. However, the cost of such accuracy is a requirement for very large amounts of computing time. Until the advent of cheaply available parallel systems, very expensive supercomputers were required in order to use numerical simulation in this way. Because of the numeric techniques which are employed to solve the systems of equations in the models, it is often possible to achieve very high levels of performance from parallel systems. Operations research techniques have been used for many years to allow engineers to optimise some aspect of their design. As in the simulation process discussed above, mathematical models are constructed which describe the problem, and then various algorithms can be applied to solve the system. However, to date, there has been very limited application of parallel computers to these problems. The main reason for this is that many operations research algorithms are highly sequential, and thus, it is not easy to achieve good speedups on large parallel systems. The problem with many operations research models is that the designer must make simplifications in the original real world problem in order to produce a system which can be solved in a reasonable time. One area where this is particularly relevant is in the process of incorporating uncertainty into the system. Traditionally, models are specified with exact values for parameters, and the sensitivity to those parameters is rarely explored. Stochastic programming is a technique which allows uncertainty to be incorporated in a problem's specification, and thus it is possible to produce a solution which takes account of the variance which is experienced in the real world. However, using conventional sequential computers, it is not possible to solve very large problems, because the model size becomes very large. Fortunately, some stochastic systems contain highly parallel data structures, and these can be exploited by parallel computers. Recently there has been quite a degree of research activity in this area. This chapter examines methods used for solving large linear stochastic optimisation problems using parallel computers. Because this is a relatively new field of operations research there are very few survey articles on the topic. Section 1 introduces the use of linear programming, and stochastic optimisation. The methods are illustrated through a very simple example. The advanced reader may choose to skip section 2. In section 2 we present an overview of the various techniques which can be used to solve linear stochastic programs, and we look at their implementation on parallel systems. In section 3, we present a comparison of the performance of the various methods. | Chapter 38, pp 1097 – 1115, Handbook of Parallel and Distributed Computing, McGraw Hill, Ed Albert Zomaya, ISBN 0-073020-2. | 2000 |  |
190 | Buyya, R, Abramson, D., and Giddy, J. | An Economy Driven Resource Management Architecture for Global Computational Power Grids The growing computational power requirements of grand challenge applications have promoted the need for linking high-performance computational resources distributed across multiple organisations. This is fueled by the availability of the Internet as a ubiquitous commodity communication media, low cost high-performance machines such as clusters across multiple organisations, and the rise of scientific problems of multi-organisational interest. The availability of expensive, special class of scientific instruments or devices and data sources in a few organisations has increased the interest in offering remote access to these resources. The recent popularity of coupling (local and remote) computational resources, special class of scientific instruments, and data sources across the Internet for solving problems has led to the emergence of a new platform called Computational Grid. This paper identifies the issues in resource management and scheduling driven by computational economy in the emerging grid computing context. They also apply to clusters of clusters environment (known as federated clusters or hyperclusters) formed by coupling multiple (geographically distributed) clusters located in the same or different organisations. We discuss our current work on the Nimrod/G resource broker, whose scheduling mechanism is driven by a user supplied application deadline and a resource access budget. However, current Grid access frameworks do not provide the dynamic resource trading services that are required to facilitate flexible application scheduling. In order to overcome this limitation, we have proposed an infrastructure called GRid Architecture for Computational Economy (GRACE). In this paper we present the motivations for grid computing, resource management architecture, Nimrod/G resource broker, computational economy, and GRACE infrastructure and its APIs along with future work. | International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA), Las Vegas, Nevada, USA, June 26 – 29, 2000 | 2000 |  |
191 | Buyya, R., Abramson, D. and Giddy, J. | Nimrod/G: An Architecture of a Resource Management and Scheduling System in a Global Computational Grid The availability of powerful microprocessors and high-speed networks as commodity components has enabled high performance computing on distributed systems (wide-area cluster computing). In this environment, as the resources are usually distributed geographically at various levels (department, enterprise, or worldwide) there is a great challenge in integrating, coordinating and presenting them as a single resource to the user; thus forming a computational grid. Another challenge comes from the distributed ownership of resources with each resource having its own access policy, cost, and mechanism. The proposed Nimrod/G grid-enabled resource management and scheduling system builds on our earlier work on Nimrod and follows a modular and component-based architecture enabling extensibility, portability, ease of development, and interoperability of independently developed components. It uses the Globus toolkit services and can be easily extended to operate with any other emerging grid middleware services. It focuses on the management and scheduling of computations over dynamic resources scattered geographically across the Internet at department, enterprise, or global level with particular emphasis on developing scheduling schemes based on the concept of computational economy for a real test bed, namely, the Globus testbed (GUSTO). | HPC Asia 2000, May 14-17, 2000, pp 283 – 289, Beijing, China | 2000 |  |
192 | Abramson, D., Giddy, J. and Kotler, L. | High Performance Parametric Modeling with Nimrod/G: Killer Application for the Global Grid? This paper examines the role of parametric modeling as an application for the global computing grid, and explores some heuristics which make it possible to specify soft real time deadlines for larger computational experiments. We demonstrate the scheme with a case study utilizing the Globus toolkit running on the GUSTO testbed. | International Parallel and Distributed Processing Symposium (IPDPS), pp 520- 528, Cancun, Mexico, May 2000 | 2000 |  |
193 | Abramson, D. | Parametric Modelling: Killer Apps for Linux Clusters | The Linux Journal, #73, May 2000, pp 84 – 91. | 2000 |  |
194 | Abramson, D. and Giddy, J. | Scheduling Large Parametric Modelling Experiments on a Distributed Meta-computer | PCW ’97, September 25 and 26, 1997, Australian National University, Canberra, pp P2-H-1 – P2-H-8 | 2000 |  |
195 | Abramson, D, Lewis, A. and Peachey, T. | Nimrod/O: A Tool for Automatic Design Optimization This paper describes a novel tool called Nimrod/O that allows a user to run an arbitrary computational model as the core of a non-linear optimization process. Nimrod/O allows a user to specify the domain and type of parameters to the model, and also a specification of which output variable is to be minimized or maximized. Accordingly, a user can formulate a question like: what parameter settings will minimize the model output? Nimrod/O currently employs a number of built-in optimization algorithms, namely BFGS, Simplex, Divide and Conquer and Simulated Annealing. Jobs can be executed on a variety of platforms, including distributed clusters and Computational Grid resources. The paper demonstrates the utility of the system with a number of case studies. | The 4th International Conference on Algorithms & Architectures for Parallel Processing (ICA3PP 2000), Hong Kong, 11 – 13 December 2000 | 2000 |  |
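As a rough analogue of the workflow described in this abstract, the sketch below treats a computational model as a black-box objective driven by an off-the-shelf simplex (Nelder-Mead) optimiser. It is not Nimrod/O and does not use its plan-file syntax; the stand-in model function and all parameter values are assumptions, and in practice the objective would launch the external model executable and parse its output.

```python
# Hedged analogue of "optimise the output of an arbitrary computational model".
# Not Nimrod/O: the model function below stands in for an external simulation.
from scipy.optimize import minimize

def model(params):
    # Stand-in "simulation": a bowl-shaped objective with its minimum at (3, -1).
    x, y = params
    return (x - 3.0) ** 2 + (y + 1.0) ** 2

# Simplex (Nelder-Mead) search over the two design parameters.
result = minimize(model, x0=[0.0, 0.0], method="Nelder-Mead")
print(result.x)  # approximately [3, -1]
```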
196 | Watson, G. and Abramson, D. | The Architecture of a Parallel Relative Debugger Relative debugging is a technique that addresses the problem of debugging programs developed using evolutionary software techniques. Recent developments allow relative debugging to be used on programs that have been ported from serial to parallel architectures. Such programs may change significantly in the porting process, and this raises many issues for the debugging process, such as how the array data and code transformations that occur in these situations can be modelled. Relative debugging can also be effectively used in situations where the parallel programs themselves have undergone evolutionary change. This paper presents the architecture of a parallel relative debugger, and reveals a number of novel and powerful techniques that have been developed for dealing with the issues of relative debugging in a parallel environment. | 13th International Conference on Parallel and Distributed Computing Systems – PDCS 2000, August 8 – 10, 2000 | 2000 |  |
197 | Watson, G., Abramson, D. | Relative Debugging for Data-Parallel Programs: A ZPL Case Study Relative debugging is a powerful paradigm that lets us locate errors in programs that result from porting or rewriting code. The authors describe their experience using relative debugging to compare a program written in a sequential language with one that was ported to the data-parallel language ZPL. | IEEE Concurrency, vol 8 issue 4, IEEE Computer Society, New York NY USA, pp. 42-52 | 2000 |  |
198 | Beasley, J.E., Krishnamoorthy, M, Sharaiha, Y.M., Abramson, D.A. | Scheduling Aircraft Landings – the Static Case In this paper, we consider the problem of scheduling aircraft (plane) landings at an airport. This problem is one of deciding a landing time for each plane such that each plane lands within a predetermined time window and that separation criteria between the landing of a plane and the landing of all successive planes are respected. We present a mixed-integer zero-one formulation of the problem for the single runway case and extend it to the multiple runway case. We strengthen the linear programming relaxations of these formulations by introducing additional constraints. Throughout, we discuss how our formulations can be used to model a number of issues (choice of objective function, precedence restrictions, restricting the number of landings in a given time period, runway workload balancing) commonly encountered in practice. The problem is solved optimally using linear programming-based tree search. We also present an effective heuristic algorithm for the problem. Computational results for both the heuristic and the optimal algorithm are presented for a number of test problems involving up to 50 planes and four runways. | Transportation Science, vol. 34, 2000, pp180-197. | 2000 |  |
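A sketch, in standard notation, of the core single-runway constraints that a zero-one formulation of this kind uses (symbols are illustrative; the paper's full model adds the multiple-runway case, strengthened relaxations, and the practical extensions listed in the abstract): each plane i lands at a time t_i inside its window, and ordering variables enforce the separation times.

    \min \sum_i \bigl( \alpha_i \max(0,\, T_i - t_i) + \beta_i \max(0,\, t_i - T_i) \bigr)
    \text{subject to}\quad E_i \le t_i \le L_i \quad \forall i,
    \qquad \delta_{ij} + \delta_{ji} = 1 \quad \forall i < j,
    \qquad t_j \ge t_i + S_{ij} - M (1 - \delta_{ij}) \quad \forall i \ne j,
    \qquad \delta_{ij} \in \{0, 1\},

where t_i is the landing time of plane i, [E_i, L_i] its time window, T_i its target time with earliness and lateness penalties \alpha_i and \beta_i, S_{ij} the required separation when i lands before j, M a sufficiently large constant, and \delta_{ij} = 1 exactly when plane i lands before plane j. The max terms in the objective are linearised in the mixed-integer program with separate earliness and lateness variables.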
199 | Abramson, D., Lowe, G. and Atkinson, P. | Are you interested in Computers and Electronics? Secondary school students, when investigating tertiary study, have little opportunity to discover what a particular course has to offer and often have a poor understanding of employment options in that field. Further, many secondary schools have limited resources, and are thus unable to provide career advice in any detail. Whilst University Open Days are a good opportunity for information seeking, we often see parents driving the direction of the student's choice. In 1997 a program was initiated aimed at informing students about the Bachelor of Digital Systems at Monash University as a course option. Because the degree involves both digital hardware and computing, a project that offered both disciplines in a non-trivial task was desirable. The development of a computer-controlled house in which electronic appliances could be monitored and operated via a computer was proposed. This system is known as the Smart House. In this paper we discuss the concept of the Smart House with an overview of the equipment. Discussion of activities in the workshops, and feedback from participants, is also covered. One of the major outcomes mentioned by participants was that the construction of hardware and software was both fun and new, and led to a real sense of achievement. | Fourth Australasian Computing Education Conference, 4 – 6 December 2000 | 2000 |  |
200 | Abramson, D., Power, K., and Sosic, R. | Simulating Computer Networks using Clusters of PCs Abstract Not Available | HPC-TelePar’99 at the 1999 Advanced Simulation Technologies Conference (ASTC’99), April 11-15, 1999, San Diego, California, USA | 1999 |  |
201 | Abramson, D. A. and Randall, M. | A Simulated Annealing code for General Integer Linear Programs Abstract Not Available | Annals of Operations Research 86(1999) | 1999 |  |
202 | Smith, K. A, Abramson, D. and Duke, D. | Efficient Timetabling Formulations for Hopfield Neural Networks Abstract Not Available | Artificial Neural Networks in Engineering Conference, 1999 (ANNIE’99). | 1999 |  |
203 | Randall, M. and Abramson, D. | A General Parallel TABU Search Algorithm for Combinatorial Optimisation Problems Abstract Not Available | Proceedings of 1999 Parallel and Real Time Conference (PART-99), December 1999, Melbourne, pp 68 – 79. | 1999 |  |
204 | Giles, S. and Abramson, D. | The Virtual Tea Room: Integrating Video into Everyday Life Abstract Not Available | International Wireless and Telecommunications Symposium, Malaysia, 17-21st May, 1999 | 1999 |  |
205 | Abramson, D.A., Dang, H. and Krishnamoorthy, M. | Simulated Annealing Cooling Schedules for the School Timetabling Problem Abstract Not Available | Asia-Pacific Journal of Operational Research, 16 (1999) 1-22. | 1999 |  |
206 | Giles, S. and Abramson, D. | The Video Wall Project – Video Technology at the Leading Edge in Education Abstract Not Available | Learning Technologies ’99, 20 – 23 October, 1999 Noosa, Queensland. | 1999 |  |
207 | Postula, A., Chen, S., Jozwiak, L and Abramson, D. | Automated Synthesis of Interleaved Memory Systems for Custom Computing Machines Abstract Not Available | Workshop on | 1998 |  |
208 | Abramson, D., Logothetis, P., Postula, A., Randall, M. | FPGA Based Custom Computing Machines for Irregular Problems Abstract Not Available | Fourth International Symposium on High-Performance Computer Architecture (HPCA98), February 1-4, 1998, Las Vegas, Nevada | 1998 |  |
209 | Smith, K. A., Abramson, D. A., Duke, D. | Hopfield Neural Networks for Timetabling: Formulations, Methods and Comparative Results Abstract Not Available | in Business Systems Research Vol I, School of Business Systems, Clayton Vic Australia, pp. 196-224. | 1998 |  |
210 | De Silva, A. and Abramson, D. A. | A Parallel Interior Point Method and its Application to Facility Location Problems Abstract Not Available | Computational Optimization and Applications, Volume 9, Number 3, March 98, pp 249 – 273 | 1998 |  |
211 | Postula, A., Abramson, D., Ziping Fang and Logothetis, P. | A Comparison of High Level Synthesis and Register Transfer Level Design Techniques for Custom Computing Machines Abstract Not Available | Configware Minitrack in the Software Technology Track of the Thirty-First Hawaii International Conference on System Sciences (HICSS-31), Hawaii, Jan 1998. | 1998 |  |
212 | Abramson, D., Logothetis, P., Randall, M. and Postula, A. | A Tail of 2n cities: Performing Combinatorial Optimisation using Linked Lists on Special Purpose Computers Abstract Not Available | The International Conference on Computational Intelligence and Multimedia Applications – 1998 (ICCIMA’98), Gippsland, Victoria, Australia, 9-11 February 1998, pp 17 – 26. | 1998 |  |
213 | Abramson, D., Smith, K., Logothetis, P. and Duke, D. | FPGA Based Implementation of a Hopfield Neural Network for Solving Constraint Satisfaction Problems Abstract Not Available | Workshop on Computational Intelligence of the 24th Euromicro Conference, Västerås, Sweden, August 25th-27th, 1998 | 1998 |  |
214 | Abramson, D, Logothetis, P., Randall, M and Postula, A. | Application Specific Computers for Combinatorial Optimisation Abstract Not Available | The Australian Computer Architecture Workshop, Sydney, Feb 1997, Springer-Verlag Singapore Pty. Ltd., Singapore, pp 29 – 43. | 1997 |  |
215 | Sosic, R., Abramson, D. A. | Guard: a relative debugger Abstract Not Available | Software – Practice & Experience, vol 27, John Wiley & Sons, Ltd, California USA, pp. 185-206. | 1997 |  |
216 | Abramson, D and Watson, G. | Relative Debugging for Parallel Systems Abstract Not Available | PCW ’97, September 25 and 26, 1997, Australian National University, Canberra, pp P1-C-1 – P1-C-8. | 1997 |  |
217 | D. Abramson, I. Foster, J. Giddy, A. Lewis, R. Sosic, R. Sutherst, N. White | The Nimrod Computational Workbench: A Case Study in Desktop Metacomputing Abstract Not Available | Australian Computer Science Conference (ACSC 97), Macquarie University, Sydney, Feb 1997, pp 17 – 26 | 1997 |  |
218 | Lewis, A, Abramson, D and Simpson, R. | Parallel non-linear optimization : Towards the design of a decision support system for air quality management Abstract Not Available | IEEE Supercomputing 97, San Jose, 1997. | 1997 |  |
219 | Abramson D., Foster, I., Michalakes, J. and Sosic R. | Relative Debugging: A new paradigm for debugging scientific applications Abstract Not Available | Communications of the Association for Computing Machinery (CACM), Vol 39, No 11, pp 67 – 77, Nov 1996 | 1996 |  |
220 | Abramson, D., Dang, H. and Krishnamoorthy, M. | A Comparison of Two Methods for Solving 0-1 Integer Programs Using a General Purpose Simulated Annealing Abstract Not Available | Annals of Operations Research, 63 (1996), pp 129 – 150 | 1996 |  |
221 | Abramson, D.A. and Sosic, R. | A Debugging and Testing Tool for Supporting Software Evolution Abstract Not Available | Journal of Automated Software Engineering, 3 (1996), pp 369 – 390 | 1996 |  |
222 | Abramson, D.A., Sosic, R. and Watson, G. | Implementation Techniques for a Parallel Relative Debugger Abstract Not Available | International Conference on Parallel Architectures and Compilation Techniques – PACT ’96, October 20-23, 1996, Boston, Massachusetts, USA | 1996 |  |
223 | Postula, A., Abramson, D. and Logothetis, P. | The Design of a Specialised Processor for the Simulation of Sintering Abstract Not Available | Proceedings of the 22nd Euromicro Conference, September 2-5, 1996, Prague, Czech Republic. | 1996 |  |
224 | Postula, A., Abramson, D. and Logothetis, P. | Synthesis for Prototyping of Application Specific Processors Abstract Not Available | Invited Key Note Talk, 3rd Asia Pacific Conference on HDL (APCHDL-96), Jan 8-10, 1996, Bangalore, India. | 1996 |  |
225 | McKay, A. and Abramson, D. | Evaluating the Performance of a SISAL implementation of the Abingdon Cross Image Processing Benchmark Abstract Not Available | International Journal of Parallel Programming, Vol 33, Number 2, pp 105 – 134, 1995. | 1995 |  |
226 | Abramson, D.A. and Sosic, R. | A Debugging Tool for Software Evolution Abstract Not Available | CASE-95, 7th International Workshop on Computer-Aided Software Engineering, Toronto, Ontario, Canada, July 1995, pp 206 – 214. Also appeared in proceedings of 2nd Working Conference on Reverse Engineering, Toronto, Ontario, Canada, July 1995 | 1995 |  |
227 | Lewis, A., Abramson D., Sosic R., Giddy J. | Tool-based Parameterisation : An Application Perspective Abstract Not Available | Computational Techniques and Applications Conference, Melbourne, July 1995 | 1995 |  |
228 | Abramson, D., Cameron, G. | DCompose: A Tool for Measuring Data Decomposition on Distributed Memory Multiprocessors Abstract Not Available | Current and Future Trends in Parallel and Distributed Computing, Ed Albert Zomaya, Thomson Computer Press, Chapter 15, pp 411 – 433, ISBN 1-85032-188-4, 1995 | 1995 |  |
229 | Abramson, D.A. and Sosic, R. | Relative Debugging using Multiple Program Versions Abstract Not Available | Key Note Address, 8th Int. Symp. on Languages for Intensional Programming , Sydney, 3-5th May, 1995. In Intensional Programming I, World Scientific, ISBN 981 – 02 – 2400 – 1. | 1995 |  |
230 | Abramson D., Sosic R., Giddy J. and Hall B. | Nimrod: A Tool for Performing Parametised Simulations using Distributed Workstations Abstract Not Available | The 4th IEEE Symposium on High Performance Distributed Computing, Virginia, August 1995, pp 112-121 | 1995 |  |
231 | Abramson, D, de Silva, A, Randall, M and Postula, A. | Special Purpose Computer Architectures for High Speed Optimisation Abstract Not Available | Parallel and Real Time Computing conference (PART-95), pp 13 – 20, September, 1995, Perth | 1995 |  |
232 | Abramson D., Foster, I., Michalakes, J. and Sosic R. | Relative Debugging and its Application to the Development of Large Numerical Models Abstract Not Available | Proceedings of IEEE Supercomputing 1995, San Diego, December 95. Selected as best paper | 1995 |  |
233 | Wail, S. and Abramson, D.A. | Can Data-flow Machines be programmed with an Imperative Language? Abstract Not Available | Appearing in | 1995 |  |
234 | Abramson, D.A., Dang, H. and Krishnamoorthy, M. | Cooling Schedules for Simulated Annealing Based Scheduling Algorithms Abstract Not Available | Proceedings of the 17th Australian Computer Science Conference, pp 541 – 550. University of Canterbury, Christchurch, NZ, Jan 1994 | 1994 |  |
235 | Abramson, D., Mills, G. and Perkins, S. | Parallelisation of a Genetic Algorithm for the Computation of Efficient Train Schedules Abstract Not Available | Proceedings of 1993 Parallel Computing and Transputers Conference, Brisbane, pp 139 – 149 Nov 1993, IOS Press | 1994 |  |
236 | Abramson, D., Sosic, R., Giddy, J., Cope, M. | The Laboratory Bench: Distributed Computing for Parametised Simulations Abstract Not Available | 1994 Parallel Computing and Transputers Conference, Wollongong, Nov 94, pp 17 – 27. | 1994 |  |
237 | Abramson, D. | Predicting the Performance of Scientific Applications on Distributed Memory Multiprocessors Abstract Not Available | The IEEE 1994 Scalable High Performance Computing Conference. Knoxville Tennessee, pp 285 – 292, May 23-25 1994. | 1994 |  |
238 | Abramson, D. | Method for Testing and Debugging Computer Programs Abstract Not Available | Australian Provisional Patent, PM5196/94. US Patent Number 5,838,975. | 1994 |  |
239 | Abramson, D.A., Cope, M. and McKenzie, R. | Modelling Photochemical Pollution using Parallel and Distributed Computing Platforms Abstract Not Available | Proceedings of PARLE-94, pp 478 – 489, Athens, Greece, July 1994. | 1994 |  |
240 | Abela, J., Abramson, D., Krishnamoorthy, M., De Silva, A. and Mills, G. | Computing Optimal Schedules for Landing Aircraft Abstract Not Available | The 12th National Conference of the Australian Society for Operations Research. Adelaide, July 7-9, 1993, pp 71 – 90 | 1993 |  |
241 | Jones, D., Hulskamp, J. and Abramson, D. | BABBAGE: A tool to facilitate the implementation of master-slave grid oriented applications on distributed computer systems Abstract Not Available | Proceedings of 1993 Parallel Computing and Transputers Conference, Brisbane, pp 350 – 359, Nov 1993. | 1993 |  |
242 | Abramson, D., Dang, H. and Krishnamoorthy, M. | Enhanced Simulated Annealing Through Linear Programming Preprocessing Abstract Not Available | The 12th National Conference of the Australian Society for Operations Research. Adelaide, July 7-9, 1993, pp 91- 114. | 1993 |  |
243 | Rotstayn, L., Francis, R., Abramson, D. and Dix, M. | Suitability of GCM physics for execution on SIMD parallel computers Abstract Not Available | J. Meteor. Soc. Japan, Vol 71,1993 . pp 297-303. | 1993 |  |
244 | Abramson, D.A. and Rosenberg, J. | Hardware Support for Program Debuggers in a Paged Virtual Memory Abstract Not Available | Computer Architecture News, June 1983, Vol 11, No 2, pp 8-19. | 1983 |  |
245 | Abramson, D. A. | High Performance Application Specific Architectures Abstract Not Available | Proceedings of 26th Hawaii International Conference on System Sciences, Kauai, Hawaii, pp 92 – 95, Jan 1993 | 1993 |  |
246 | De Silva, A. and Abramson, D. | Computational Experience with the Parallel Progressive Hedging Algorithm for Stochastic Linear Programs Abstract Not Available | Proceedings of 1993 Parallel Computing and Transputers Conference, pp 164 – 174, Brisbane, Nov 1993 | 1993 |  |
247 | Abramson, D., Cameron, G., Dix, M. and Makies, M. | STORM: A Bus Based Shared Memory Multiprocessor for Climate Modelling Abstract Not Available | Proceedings of 26th Hawaii International Conference on System Sciences, Kauai, Hawaii, pp 96 – 105, Jan 1993. | 1993 |  |
248 | Jones, D., Hulskamp, J. and Abramson, D. | Tools to facilitate the implementation of grid based finite difference algorithms on distributed computer systems Abstract Not Available | Proceedings of World Transputer Conference, Aachen, 20-22 September, Germany, 1993 | 1993 |  |
249 | Rosenberg, J., Keedy, J. L. and Abramson, D. | Addressing Mechanisms for Large Virtual Memories Abstract Not Available | The Computer Journal, August 1992. | 1992 |  |
250 | Rawling, M., Francis, R. and Abramson, D. | Performance Bounds for the Conservative Parallel Discrete Event Simulation of VLSI Circuits and Systems Abstract Not Available | 15th Australian Computer Science Conference, Hobart, Jan 1992, pp 753 – 767. | 1992 |  |
251 | Abramson, D. A. and Abela, J. | A Parallel Genetic Algorithm for Solving the School Timetabling Problem Abstract Not Available | IJCAI workshop on Parallel Processing in AI, Sydney, August 1991. Also appearing in 15th Australian Computer Science Conference, Hobart, Feb 1992, pp 1 – 11. | 1992 |  |
252 | Rawling, M., Francis, R. and Abramson, D. | Potential Performance of Parallel Conservative Simulation of VLSI Circuits and Systems Abstract Not Available | IEEE/ACM 25th Annual Simulation Symposium, pp 71 – 80. 1992. | 1992 |  |
253 | Abramson, D.A. | A Very High Speed Architecture to Support Simulated Annealing Abstract Not Available | IEEE Computer, May 1992, pp 27 – 34. | 1992 |  |
254 | Abramson, D.A. and Egan, G. | Design Considerations for a High Performance Dataflow Multiprocessor Abstract Not Available | Chapter 4, | 1991 |  |
255 | Abramson, D.A. and Freidin, J. | A Parallel Router for Printed Circuit Boards Abstract Not Available | 24th Hawaii International Conference on System Sciences, pp 164 – 171, Jan 1991. | 1991 |  |
256 | Abramson, D., Francis, R. and Dix, M. | A Retargettable Programming Environment for Studying Climate Models Abstract Not Available | Computational Techniques and Applications (CTAC-91), pp 91 – 100, July 1991. | 1991 |  |
257 | Abramson, D.A., Dix, M., Whiting, P. | A Study of the Shallow Water Equations on Various Parallel Architectures Abstract Not Available | 14th Australian Computer Science Conference, pp 06-1 – 06-12, Sydney, 1991 | 1991 |  |
258 | Francis, R., Abramson, D., Dix, M. and Rotstayn, L. | SIMD Climate Modeling Abstract Not Available | Parallel Computing 91, pp 471 – 482, London, September 1991. | 1991 |  |
259 | McKay, A and Abramson, D. | Using SISAL to implement the Abingdon Cross Image Processing Benchmark Abstract Not Available | 4th Australian SuperComputing Conference, Gold Coast, Dec 1991, pp 107 – 116. | 1991 |  |
260 | Abramson, D. A. | Constructing School Timetables using Simulated Annealing: Sequential and Parallel Algorithms Abstract Not Available | Management Science, Vol 37, No 1, Jan 1991, pp 98 – 113 | 1991 |  |
261 | Jones, D, Abramson, D.A. and Hulskamp, J. | Porting Parallel Programs from Shared Memory Multiprocessors to Transputer Based Message Passing Systems Abstract Not Available | 1991 Australian Transputer and OCCAM users group, Canberra, September 1991. | 1991 |  |
262 | Abramson, D.A., et al. | Super Computing Applications in the CSIRO-DIT High Performance Computation Project Abstract Not Available | 3rd Australian Supercomputing Conference, Melbourne, Dec 1990 | 1990 |  |
263 | Abramson, D and Egan, G, | The RMIT Dataflow Computer: A Hybrid Architecture Abstract Not Available | The Computer Journal, Vol 33, No 3, June 1990, pp 230 – 240. | 1990 |  |
264 | Abramson, D. | An Apparatus for use in Producing a Timetable Abstract Not Available | Australian Provisional Patent, PK0925/90. | 1990 |  |
265 | Abramson, D.A. | Case Studies in Parallel Processing Abstract Not Available | Proceedings of 1989 International Conference on Computational Techniques and Applications (CTAC 89), Brisbane, 1989, Hemisphere Publishing. | 1989 |  |
266 | Abramson, D.A., Ramamohanarao, K., and Ross, M. | A Scalable Cache Coherence Algorithm using a Selectively Clearable Cache Memory Abstract Not Available | Australian Computer Journal, Vol 21, No 1, Feb 1989. | 1989 |  |
267 | Abramson, D. A. | Using a dataflow computer for functional logic simulation Abstract Not Available | Third International Conference on Supercomputing, Boston, Massachusetts, May 15-20, 1988. | 1988 |  |
268 | Abramson, D, and Egan, G. | An Overview of the RMIT/CSIRO Parallel Systems Architecture Project Abstract Not Available | Australian Computer Journal, Vol 20, No 3, August 1988. | 1988 |  |
269 | Abramson, D. and Rosenberg, J. | The Micro-Architecture of a Capability Based Computer Abstract Not Available | Proceedings of ACM 19th International Conference on Microprogramming, New York, 1986. | 1986 |  |
270 | Abramson, D. and Rosenberg, J. | Tools for Microcode development in a Capability based computer Abstract Not Available | Proceedings of ACM 19th International Conference on Microprogramming, New York, 1986. | 1986 |  |
271 | Yap, R., Rosenberg, J. and Abramson, D.A. | A C compiler for a Capability Based Computer Abstract Not Available | Proceedings of the 9th Australian Computer Science Conference, Canberra, Jan 1985, pp 23 – 34. | 1985 |  |
272 | Abramson, D.A. and Keedy, J.L. | Implementing a Large Virtual Memory in a Distributed Computer System Abstract Not Available | Proceedings of Eighteenth Annual Hawaii International Conference on System Sciences, January 2-4, 1985. | 1985 |  |
273 | Rosenberg, J. and Abramson, D.A. | MONADS PC – A Capability-Based Workstation to Support Software Engineering Abstract Not Available | Proceedings of Eighteenth Annual Hawaii International Conference on System Sciences, January 2-4, 1985. (Selected as Best paper in Hardware track) | 1985 |  |
274 | Rosenberg, J. and Abramson, D.A. | The MONADS Architecture: Motivation and Implementation Abstract Not Available | Proceedings of First Pan Pacific Computer Conference, Melbourne, 1985. Invited Paper. | 1985 |  |
275 | Abramson, D.A. and Rosenberg, J. | Supporting a Capability-based Architecture in Silicon Abstract Not Available | 4th Australian Micro-electronics conference, Sydney, May 1985, pp 43 – 50. | 1985 |  |
276 | Abramson, D.A. and Rosenberg, J. | A Vertical User Interface to Horizontal Microcode via a Retargetable Microassembler Abstract Not Available | Proceedings of 8th Australian Computer Science Conference, Feb 1985, Melbourne, Australia, pp 8-1 – 8-10. | 1985 |  |
277 | Abramson, D.A. | A Microcode Emulator for Undergraduate Teaching Abstract Not Available | Proc. 7th Australian Computer Science Conference, Adelaide, Australia, Feb 1984, pp 3-1 – 3-10. | 1984 |  |
278 | Abramson, D.A. | The MONADS II Computer System Abstract Not Available | Proc. 6th Australian Computer Science Conference, Sydney, February 1983, pp 1 – 10. | 1983 |  |
279 | Abramson, D.A. | Hardware for Capability Based Addressing Abstract Not Available | Proc. 9th Australian Computer Conference, Hobart, pp 101-115, August, 1982. | 1982 |  |
280 | Keedy, J.L., Abramson D., Rosenberg, J. and Rowe, D.M. | A Comparison of the MONADS II and III Computer Systems Abstract Not Available | Proc. 9th Australian Computer Conference, Hobart, August 1982, pp 581 – 58 . | 1982 |  |
281 | Keedy, J.L., Abramson, D., Rosenberg, J. and Rowe, D.M. | The MONADS Project Stage 2: Hardware Designed to Support Software Engineering Techniques Abstract Not Available | Proc. 9th Australian Computer Conference, Hobart, August 1982, pp 575 – 580. | 1982 |  |
282 | Abramson, D.A. | A Technique for Enhancing Processor Architecture Abstract Not Available | Proc. 5th Australian Computer Science Conference | 1982 |  |
283 | Abramson, D.A. | Hardware Management of a Large Virtual Memory Abstract Not Available | Proc. 4th Australian Computer Science Conference, Brisbane (Australian Computer Science Communications 3, 1, pp. 1-13). | 1981 |  |