The University of Adelaide

People matching "Design and analysis of microarray and other experi"

Associate Professor Sanjeeva Balasuriya
Senior Lecturer in Applied Mathematics


More about Sanjeeva Balasuriya...
Associate Professor Nicholas Buchdahl
Reader in Pure Mathematics


More about Nicholas Buchdahl...
Associate Professor Gary Glonek
Associate Professor in Statistics


More about Gary Glonek...
Associate Professor Inge Koch
Associate Professor in Statistics


More about Inge Koch...
Professor Finnur Larusson
Associate Professor in Pure Mathematics


More about Finnur Larusson...
Professor Patty Solomon
Professor of Statistical Bioinformatics


More about Patty Solomon...
Dr Simon Tuke
Lecturer in Statistics


More about Simon Tuke...

Courses matching "Design and analysis of microarray and other experi"

Analysis of multivariable and high dimensional data

Multivariate analysis of data is performed with three aims: 1. to understand the structure in data and summarise it in simpler ways; 2. to understand the relationship of one part of the data to another; and 3. to make decisions or draw inferences based on data. The statistical analyses of multivariate data extend those of univariate data, and in doing so require more advanced mathematical theory and computational techniques. The course begins with a discussion of the three classical methods corresponding to the aims above: Principal Component Analysis, Canonical Correlation Analysis and Discriminant Analysis. We also learn about Cluster Analysis, Factor Analysis and newer methods including Independent Component Analysis. For most real data the underlying distribution is not known, but if the assumption of multivariate normality of the data holds, extra properties can be derived. Our treatment combines ideas, theoretical properties and a strong computational component for each of the different methods we discuss. For the computational part -- with Matlab -- we make use of real data and learn to use simulations to assess the performance of different methods in practice.

Topics covered:
1. Introduction to multivariate data, the multivariate normal distribution
2. Principal Component Analysis, theory and practice
3. Canonical Correlation Analysis, theory and practice
4. Discriminant Analysis, Fisher's LDA, linear and quadratic DA
5. Cluster Analysis: hierarchical and k-means methods
6. Factor Analysis and latent variables
7. Independent Component Analysis, including an introduction to Information Theory

The course will be based on my forthcoming monograph Analysis of Multivariate and High-Dimensional Data - Theory and Practice, to be published by Cambridge University Press.
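The first of the classical methods, Principal Component Analysis, can be sketched in a few lines. The course's computing is in Matlab; the following is a rough Python/NumPy equivalent (my own illustration, not course material): centre the data, diagonalise the sample covariance matrix, and read off the component scores and the proportion of variance each component explains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 observations of 3 correlated variables.
X = rng.standard_normal((200, 3)) @ np.array([[2.0, 0.0, 0.0],
                                              [0.8, 1.0, 0.0],
                                              [0.3, 0.2, 0.5]])

# Centre the data, then diagonalise the sample covariance matrix.
Xc = X - X.mean(axis=0)
cov = (Xc.T @ Xc) / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)       # eigh returns ascending order
order = np.argsort(eigvals)[::-1]            # sort descending instead
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Xc @ eigvecs                        # principal component scores
explained = eigvals / eigvals.sum()          # proportion of variance per PC
print(explained)
```

The scores are uncorrelated by construction, which is the "simpler summary" promised by aim 1 above.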

More about this course...

Complex Analysis III

When the real numbers are replaced by the complex numbers in the definition of the derivative of a function, the resulting (complex-)differentiable functions turn out to have many remarkable properties not enjoyed by their real analogues. These functions, usually known as holomorphic functions, have numerous applications in areas such as engineering, physics, differential equations and number theory, to name just a few. The focus of this course is on the study of holomorphic functions and their most important basic properties. Topics covered are: Complex numbers and functions; complex limits and differentiability; elementary examples; analytic functions; complex line integrals; Cauchy's theorem and the Cauchy integral formula; Taylor's theorem; zeros of holomorphic functions; Rouche's Theorem; the Open Mapping theorem and Inverse Function theorem; Schwarz' Lemma; automorphisms of the ball, the plane and the Riemann sphere; isolated singularities and their classification; Laurent series; the Residue Theorem; calculation of definite integrals and evaluation of infinite series using residues; outlines of the Jordan Curve Theorem, Montel's Theorem and the Riemann Mapping Theorem.
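The Cauchy integral formula listed above, f(a) = (1/(2*pi*i)) * the integral of f(z)/(z - a) around a contour enclosing a, lends itself to a quick numerical sanity check. The sketch below (my own illustration, not part of the course) discretises the unit circle with the trapezoidal rule and recovers e^a from values of e^z on the contour:

```python
import cmath

def contour_integral(f, center=0.0, radius=1.0, n=2000):
    """Approximate the integral of f along the circle |z - center| = radius,
    traversed once counterclockwise, by the trapezoidal rule."""
    total = 0.0 + 0.0j
    for k in range(n):
        t = 2 * cmath.pi * k / n
        z = center + radius * cmath.exp(1j * t)
        dz = 1j * radius * cmath.exp(1j * t) * (2 * cmath.pi / n)
        total += f(z) * dz
    return total

# Cauchy integral formula: (1/(2*pi*i)) * integral of f(z)/(z - a) = f(a)
# for a inside the contour; here f(z) = exp(z) and a inside the unit circle.
a = 0.3 + 0.2j
lhs = contour_integral(lambda z: cmath.exp(z) / (z - a)) / (2j * cmath.pi)
print(lhs, cmath.exp(a))
```

Because the integrand is analytic near the contour, the periodic trapezoidal rule converges extremely fast, so even modest n gives near machine-precision agreement.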

More about this course...

Integration and Analysis III

The Riemann integral works well for continuous functions on closed bounded intervals, but it has certain deficiencies that cause problems, for example, in Fourier analysis and in the theory of differential equations. To overcome such deficiencies, a "new and improved" version of the integral was developed around the beginning of the twentieth century, and it is this theory with which this course is concerned. The underlying basis of the theory, measure theory, has important applications not just in analysis but also in the modern theory of probability. Topics covered are: Set theory; Lebesgue outer measure; measurable sets; measurable functions. Integration of measurable functions over measurable sets. Convergence of sequences of functions and their integrals. General measure spaces and product measures. Fubini and Tonelli's theorems. Lp spaces. The Radon-Nikodym theorem. The Riesz representation theorem. Integration and Differentiation.

More about this course...

Real Analysis

Modern mathematics and physics rely on our ability to solve equations, if not in explicit exact form, then at least by establishing the existence of solutions. To do this requires a knowledge of so-called "analysis", which in many respects is just Calculus in very general settings. The foundations for this work are commenced in Real Analysis, a course that develops this basic material in a systematic and rigorous manner in the context of real-valued functions of a real variable. Topics covered are: Basic set theory. The real numbers, least upper bounds, completeness and its consequences. Sequences: convergence, subsequences, Cauchy sequences. Open, closed, and compact sets of real numbers. Continuous functions, uniform continuity. Differentiation, the Mean Value Theorem. Sequences and series of functions, pointwise and uniform convergence. Power series and Taylor series. Metric spaces: basic notions generalised from the setting of the real numbers. The space of continuous functions on a compact interval. The Contraction Principle. Picard's Theorem on the existence and uniqueness of solutions of ordinary differential equations.
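Picard's Theorem at the end of the syllabus has a constructive proof: the Picard iterates y_{k+1}(t) = y(0) + integral from 0 to t of f(s, y_k(s)) ds converge to the solution. A small illustrative sketch (my own, with the integral approximated by the trapezoidal rule) for y' = y, y(0) = 1, whose exact solution is e^t:

```python
import math

def picard_iterate(n_iter=15, t_max=1.0, n_grid=1000):
    """Picard iteration for y' = y, y(0) = 1 on [0, t_max]:
    y_{k+1}(t) = 1 + integral_0^t y_k(s) ds, starting from y_0 = 1.
    Integrals are approximated with the trapezoidal rule on a uniform grid."""
    h = t_max / n_grid
    y = [1.0] * (n_grid + 1)                   # y_0(t) = 1
    for _ in range(n_iter):
        new = [1.0]
        acc = 0.0
        for i in range(n_grid):
            acc += 0.5 * (y[i] + y[i + 1]) * h  # running integral of y_k
            new.append(1.0 + acc)
        y = new
    return y

y = picard_iterate()
print(y[-1], math.e)   # the iterates converge to exp(t); compare at t = 1
```

Each iterate adds one more term of the Taylor series of e^t, which is exactly the contraction-mapping convergence the theorem guarantees.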

More about this course...

Statistical Analysis and Modelling 1

This is a first course in Statistics for mathematically inclined students. It will address the key principles underlying commonly used statistical methods such as confidence intervals, hypothesis tests, inference for means and proportions, and linear regression. It will develop a deeper mathematical understanding of these ideas, many of which will be familiar from studies in secondary school. The application of basic and more advanced statistical methods will be illustrated on a range of problems from areas such as medicine, science, technology, government, commerce and manufacturing. The use of the statistical package SPSS will be developed through a sequence of computer practicals. Topics covered will include: basic probability and random variables, fundamental distributions, inference for means and proportions, comparison of independent and paired samples, simple linear regression, diagnostics and model checking, multiple linear regression, simple factorial models, models with factors and continuous predictors.
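The course's practicals use SPSS; purely as an illustration of the arithmetic underlying simple linear regression (my own sketch, not course material), the least-squares estimates, residual variance, and an approximate 95% confidence interval for the slope can be computed directly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: y = 2 + 3x + noise with standard deviation 1.
x = np.linspace(0, 10, 50)
y = 2.0 + 3.0 * x + rng.normal(0.0, 1.0, size=x.size)

n = x.size
xbar, ybar = x.mean(), y.mean()
Sxx = np.sum((x - xbar) ** 2)
Sxy = np.sum((x - xbar) * (y - ybar))

b1 = Sxy / Sxx                       # slope estimate
b0 = ybar - b1 * xbar                # intercept estimate
resid = y - (b0 + b1 * x)
s2 = np.sum(resid ** 2) / (n - 2)    # residual variance estimate
se_b1 = np.sqrt(s2 / Sxx)            # standard error of the slope

# Approximate 95% CI using the normal quantile 1.96
# (the exact t-quantile with n - 2 = 48 degrees of freedom is very close).
ci = (b1 - 1.96 * se_b1, b1 + 1.96 * se_b1)
print(b1, ci)
```
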

More about this course...

Topics in analysis

Description TBA

More about this course...

Topology and Analysis III

Solving equations is a crucial aspect of working in mathematics, physics, engineering, and many other fields. These equations might be straightforward algebraic statements, or complicated systems of differential equations, but there are some fundamental questions common to all of these settings: does a solution exist? If so, is it unique? And if we know of the existence of some specific solution, how do we determine it explicitly or as accurately as possible? This course develops the foundations required to rigorously establish the existence of solutions to various equations, thereby laying the basis for the study of those solutions. Through an understanding of the foundations of analysis, we obtain insight critical in numerous areas of application, ranging across physics, engineering, economics and finance. Topics covered are: sets, functions, metric spaces and normed linear spaces, compactness, connectedness, and completeness. Banach fixed point theorem and applications, uniform continuity and convergence. General topological spaces, generating topologies, topological invariants, quotient spaces. Introduction to Hilbert spaces and bounded operators on Hilbert spaces.
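The Banach fixed point theorem in the syllabus is easy to see in action. As a toy illustration (mine, not from the course): the map x -> cos x is a contraction on [0, 1], since |d/dx cos x| = |sin x| < 1 there, so iterating it converges to the unique solution of x = cos x.

```python
import math

def banach_iterate(f, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{n+1} = f(x_n); for a contraction f this converges to the
    unique fixed point guaranteed by the Banach fixed point theorem."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter")

# The unique solution of x = cos x (sometimes called the Dottie number).
root = banach_iterate(math.cos, 0.5)
print(root)
```

The theorem also gives the rate: the error shrinks by at least the contraction constant (here about sin(0.74), roughly 0.67) at every step.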

More about this course...

Events matching "Design and analysis of microarray and other experi"

Stability of time-periodic flows
15:10 Fri 10 Mar, 2006 :: G08 Mathematics Building University of Adelaide :: Prof. Andrew Bassom, School of Mathematics and Statistics, University of Western Australia

Time-periodic shear layers occur naturally in a wide range of applications from engineering to physiology. Transition to turbulence in such flows is of practical interest and there have been several papers dealing with the stability of flows composed of a steady component plus an oscillatory part with zero mean. In such flows a possible instability mechanism is associated with the mean component so that the stability of the flow can be examined using some sort of perturbation-type analysis. This strategy fails when the mean part of the flow is small compared with the oscillatory component which, of course, includes the case when the mean part is precisely zero.

This difficulty with analytical studies has meant that the stability of purely oscillatory flows has relied on various numerical methods. Until very recently such techniques have only ever predicted that the flow is stable, even though experiments suggest that they do become unstable at high enough speeds. In this talk I shall expand on this discrepancy with emphasis on the particular case of the so-called flat Stokes layer. This flow, which is generated in a deep layer of incompressible fluid lying above a flat plate which is oscillated in its own plane, represents one of the few exact solutions of the Navier-Stokes equations. We show theoretically that the flow does become unstable to waves which propagate relative to the basic motion although the theory predicts that this occurs much later than has been found in experiments. Reasons for this discrepancy are examined by reference to calculations for oscillatory flows in pipes and channels. Finally, we propose some new experiments that might reduce this disagreement between the theoretical predictions of instability and practical realisations of breakdown in oscillatory flows.
Homological algebra and applications - a historical survey
15:10 Fri 19 May, 2006 :: G08 Mathematics Building University of Adelaide :: Prof. Amnon Neeman

Homological algebra is a curious branch of mathematics; it is a powerful tool which has been used in many diverse places, without any clear understanding why it should be so useful. We will give a list of applications, proceeding chronologically: first to topology, then to complex analysis, then to algebraic geometry, then to commutative algebra and finally (if we have time) to non-commutative algebra. At the end of the talk I hope to be able to say something about the part of homological algebra on which I have worked, and its applications. That part is derived categories.
A Bivariate Zero-inflated Poisson Regression Model and application to some Dental Epidemiological data
14:10 Fri 27 Oct, 2006 :: G08 Mathematics Building University of Adelaide :: University Prof Sudhir Paul

Data in the form of paired (pre-treatment, post-treatment) counts arise in the study of the effects of several treatments after accounting for possible covariate effects. An example of such a data set comes from a dental epidemiological study in Belo Horizonte (the Belo Horizonte caries prevention study) which evaluated various programmes for reducing caries. These data may also show more pairs of zeros than can be accounted for by a simpler model, such as a bivariate Poisson regression model. In such situations we propose to use a zero-inflated bivariate Poisson regression (ZIBPR) model for the paired (pre-treatment, post-treatment) count data. We develop an EM algorithm to obtain maximum likelihood estimates of the parameters of the ZIBPR model. Further, we obtain the exact Fisher information matrix of the maximum likelihood estimates of the parameters of the ZIBPR model and develop a procedure for testing treatment effects. The procedure to detect treatment effects based on the ZIBPR model is compared, in terms of size, by simulations, with an earlier procedure using a zero-inflated Poisson regression (ZIPR) model of the post-treatment count with the pre-treatment count treated as a covariate. The procedure based on the ZIBPR model holds its level most effectively. A further simulation study indicates the good power properties of the procedure based on the ZIBPR model. We then compare our analysis of the decayed, missing and filled teeth (DMFT) index data from the caries prevention study, based on the ZIBPR model, with the analysis using a zero-inflated Poisson regression model in which the pre-treatment DMFT index is taken to be a covariate.
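As a much-simplified illustration of the EM structure used in this setting, consider a univariate zero-inflated Poisson model with no covariates (this stripped-down sketch is mine; the talk's ZIBPR model is bivariate with covariate effects). The E-step computes, for each observed zero, the posterior probability that it is a "structural" zero; the M-step re-estimates the mixing weight and the Poisson mean:

```python
import math
import random

def zip_em(data, n_iter=200):
    """EM for a univariate zero-inflated Poisson model:
    P(Y=0) = p + (1-p)exp(-lam),  P(Y=k) = (1-p) Poisson(k; lam) for k >= 1.
    Latent z_i = 1 means observation i came from the degenerate zero component."""
    n = len(data)
    p, lam = 0.5, max(sum(data) / n, 0.1)        # crude initial values
    for _ in range(n_iter):
        # E-step: posterior probability that each zero is a structural zero.
        z = [p / (p + (1 - p) * math.exp(-lam)) if y == 0 else 0.0
             for y in data]
        # M-step: update mixing weight and Poisson mean.
        p = sum(z) / n
        w = [1 - zi for zi in z]
        lam = sum(wi * y for wi, y in zip(w, data)) / sum(w)
    return p, lam

# Simulate data with known parameters and recover them.
random.seed(0)
true_p, true_lam = 0.3, 2.0

def draw():
    if random.random() < true_p:
        return 0
    # Poisson sampler via CDF inversion (adequate for small lam).
    u, k, cdf, pk = random.random(), 0, 0.0, math.exp(-true_lam)
    while True:
        cdf += pk
        if u <= cdf:
            return k
        k += 1
        pk *= true_lam / k

data = [draw() for _ in range(5000)]
p_hat, lam_hat = zip_em(data)
print(p_hat, lam_hat)
```
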
Identifying the source of photographic images by analysis of JPEG quantization artifacts
15:10 Fri 27 Apr, 2007 :: G08 Mathematics Building University of Adelaide :: Dr Matthew Sorell

Media...
In a forensic context, digital photographs are becoming more common as sources of evidence in criminal and civil matters. Questions that arise include identifying the make and model of a camera to assist in the gathering of physical evidence; matching photographs to a particular camera through the camera’s unique characteristics; and determining the integrity of a digital image, including whether the image contains steganographic information. From a digital file perspective, there is also the question of whether metadata has been deliberately modified to mislead the investigator, and in the case of multiple images, whether a timeline can be established from the various timestamps within the file, imposed by the operating system or determined by other image characteristics. This talk is concerned specifically with techniques to identify the make, model series and particular source camera model given a digital image. We exploit particular characteristics of the camera’s JPEG coder to demonstrate that such identification is possible, and that even when an image has subsequently been re-processed, there are often sufficient residual characteristics of the original coding to at least narrow down the possible camera models of interest.
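A core observation behind quantization-based identification is that after JPEG compression each dequantized DCT coefficient is an integer multiple of its quantization step, so the step (and hence clues to the coder) can be recovered from the coefficients. The toy sketch below (my own noise-free illustration of that single idea, not Dr Sorell's actual method) recovers the step with a gcd:

```python
from math import gcd
from functools import reduce

def estimate_quant_step(dct_coeffs):
    """Estimate the quantization step from dequantized DCT coefficients.
    After quantization, each coefficient is step * round(c / step), i.e. an
    integer multiple of the step, so the gcd of the nonzero magnitudes
    recovers it (in this idealised noise-free setting)."""
    vals = [abs(int(c)) for c in dct_coeffs if c != 0]
    if not vals:
        return None
    return reduce(gcd, vals)

# A hypothetical block's coefficients after quantization with step 6.
coeffs = [6 * k for k in (-12, 5, 0, 3, -1, 7, 0, 2)]
print(estimate_quant_step(coeffs))
```

Real images add rounding and re-processing noise, which is why the forensic techniques in the talk work with coefficient histograms rather than an exact gcd.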
Finite Geometries: Classical Problems and Recent Developments
15:10 Fri 20 Jul, 2007 :: G04 Napier Building University of Adelaide :: Prof. Joseph A. Thas :: Ghent University, Belgium

In recent years there has been an increasing interest in finite projective spaces, and important applications to practical topics such as coding theory, cryptography and design of experiments have made the field even more attractive. In my talk some classical problems and recent developments will be discussed. First I will mention Segre's celebrated theorem on ovals and a purely combinatorial characterization of Hermitian curves in the projective plane over a finite field; here, from the beginning, the pointset considered is contained in the projective plane over a finite field. Next, a recent elegant result on semiovals in PG(2,q), due to Gács, will be given. A second approach is where the object is described as an incidence structure satisfying certain properties; here the geometry is not a priori embedded in a projective space. This will be illustrated by a characterization of the classical inversive plane in the odd case. Another quite recent beautiful result in Galois geometry is the discovery of an infinite class of hemisystems of the Hermitian variety in PG(3,q^2), leading to new interesting classes of incidence structures, graphs and codes; before this result, just one example, for GF(9), due to Segre, was known.
The Linear Algebra of Internet Search Engines
15:10 Fri 5 Oct, 2007 :: G04 Napier Building University of Adelaide :: Dr Lesley Ward :: School of Mathematics and Statistics, University of South Australia

We often want to search the web for information on a given topic. Early web-search algorithms worked by counting up the number of times the words in a query topic appeared on each webpage. If the topic words appeared often on a given page, that page was ranked highly as a source of information on that topic. More recent algorithms rely on Link Analysis. People make judgments about how useful a given page is for a given topic, and they express these judgments through the hyperlinks they choose to put on their own webpages. Link-analysis algorithms aim to mine the collective wisdom encoded in the resulting network of links. I will discuss the linear algebra that forms the common underpinning of three link-analysis algorithms for web search. I will also present some work on refining one such algorithm, Kleinberg's HITS algorithm. This is joint work with Joel Miller, Greg Rae, Fred Schaefer, Ayman Farahat, Tom LoFaro, Tracy Powell, Estelle Basor, and Kent Morrison. It originated in a Mathematics Clinic project at Harvey Mudd College.
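The HITS algorithm mentioned in the talk is a short exercise in the linear algebra described above: alternately set authority scores from hub scores and hub scores from authority scores, which is power iteration converging to the principal eigenvectors of A^T A and A A^T. A minimal sketch (my own illustration):

```python
import numpy as np

def hits(adj, n_iter=100):
    """Kleinberg's HITS iteration: authority <- A^T hub, hub <- A authority,
    normalizing each time. This is power iteration, so the scores converge
    to the principal eigenvectors of A^T A and A A^T respectively."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    hub = np.ones(n)
    auth = np.ones(n)
    for _ in range(n_iter):
        auth = A.T @ hub
        auth /= np.linalg.norm(auth)
        hub = A @ auth
        hub /= np.linalg.norm(hub)
    return hub, auth

# Tiny web graph: pages 0 and 1 both link to page 2; page 2 links to page 3.
adj = [[0, 0, 1, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
hub, auth = hits(adj)
print(hub, auth)
```

On this graph the iteration singles out page 2 as the dominant authority (two in-links from good hubs) and pages 0 and 1 as equally good hubs.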
Add one part chaos, one part topology, and stir well...
13:10 Fri 19 Oct, 2007 :: Engineering North 132 :: Dr Matt Finn :: School of Mathematical Sciences

Media...
Stirring and mixing of fluids occurs everywhere, from adding milk to a cup of coffee, right through to industrial-scale chemical blending. So why stir in the first place? Is it possible to do it badly? And how can you make sure you do it effectively? I will attempt to answer these questions using a few thought experiments, some dynamical systems theory and a little topology.
Similarity solutions for surface-tension driven flows
15:10 Fri 14 Mar, 2008 :: LG29 Napier Building University of Adelaide :: Prof John Lister :: Department of Applied Mathematics and Theoretical Physics, University of Cambridge, UK

The breakup of a mass of fluid into drops is a ubiquitous phenomenon in daily life, the natural environment and technology, with common examples including a dripping tap, ocean spray and ink-jet printing. It is a feature of many generic industrial processes such as spraying, emulsification, aeration, mixing and atomisation, and is an undesirable feature in coating and fibre spinning. Surface-tension driven pinch-off and the subsequent recoil are examples of finite-time singularities in which the interfacial curvature becomes infinite at the point of disconnection. As a result, the flow near the point of disconnection becomes self-similar and independent of initial and far-field conditions. Similarity solutions will be presented for the cases of inviscid and very viscous flow, along with comparison to experiments. In each case, a boundary-integral representation can be used both to examine the time-dependent behaviour and as the basis of a modified Newton scheme for direct solution of the similarity equations.
Computational Methods for Phase Response Analysis of Circadian Clocks
15:10 Fri 18 Jul, 2008 :: G04 Napier Building University of Adelaide. :: Prof. Linda Petzold :: Dept. of Mechanical and Environmental Engineering, University of California, Santa Barbara

Circadian clocks govern daily behaviors of organisms in all kingdoms of life. In mammals, the master clock resides in the suprachiasmatic nucleus (SCN) of the hypothalamus. It is composed of thousands of neurons, each of which contains a sloppy oscillator - a molecular clock governed by a transcriptional feedback network. Via intercellular signaling, the cell population synchronizes spontaneously, forming a coherent oscillation. This multi-oscillator is then entrained to its environment by the daily light/dark cycle.

Both at the cellular and tissue levels, the most important feature of the clock is its ability not simply to keep time, but to adjust its time, or phase, in response to signals. We present the parametric impulse phase response curve (pIPRC), an analytical analog to the phase response curve (PRC) used experimentally. We use the pIPRC to understand both the consequences of intercellular signaling and the light entrainment process. Further, we determine which model components determine the phase response behavior of a single oscillator by using a novel model reduction technique. We reduce the number of model components while preserving the pIPRC and then incorporate the resultant model into a coupled SCN tissue model. Emergent properties, including the ability of the population to synchronize spontaneously, are preserved in the reduction. Finally, we present some mathematical tools for the study of synchronization in a network of coupled, noisy oscillators.

Betti's Reciprocal Theorem for Inclusion and Contact Problems
15:10 Fri 1 Aug, 2008 :: G03 Napier Building University of Adelaide :: Prof. Patrick Selvadurai :: Department of Civil Engineering and Applied Mechanics, McGill University

Enrico Betti (1823-1892) is recognized in the mathematics community for his pioneering contributions to topology. An equally important contribution is his formulation of the reciprocity theorem applicable to elastic bodies that satisfy the classical equations of linear elasticity. Although James Clerk Maxwell (1831-1879) proposed a law of reciprocal displacements and rotations in 1864, the contribution of Betti is acknowledged for its underlying formal mathematical basis and generality. The purpose of this lecture is to illustrate how Betti's reciprocal theorem can be used to full advantage to develop compact analytical results for certain contact and inclusion problems in the classical theory of elasticity. Inclusion problems are encountered in a number of areas in applied mechanics, ranging from composite materials to geomechanics. In composite materials, the inclusion represents an inhomogeneity that is introduced to increase either the strength or the deformability characteristics of the resulting material. In geomechanics, the inclusion represents a constructed material region, such as a ground anchor, that is introduced to provide load transfer from structural systems. Similarly, contact problems have applications ranging from the modelling of the behaviour of indentors used in materials testing to the study of foundations used to distribute loads transmitted from structures. In the study of conventional problems the inclusions and the contact regions are directly loaded, and this makes their analysis quite straightforward. When the interaction is induced by loads that are placed exterior to the indentor or inclusion, the direct analysis of the problem becomes inordinately complicated, both in terms of the formulation of the integral equations and their numerical solution.
It is shown by a set of selected examples that the application of Betti's reciprocal theorem leads to the development of exact closed form solutions to what would otherwise be approximate solutions achievable only through the numerical solution of a set of coupled integral equations.
The Role of Walls in Chaotic Mixing
15:10 Fri 22 Aug, 2008 :: G03 Napier Building University of Adelaide :: Dr Jean-Luc Thiffeault :: Department of Mathematics, University of Wisconsin - Madison

I will report on experiments of chaotic mixing in closed and open vessels, in which a highly viscous fluid is stirred by a moving rod. In these experiments we analyze quantitatively how the concentration field of a low-diffusivity dye relaxes towards homogeneity, and observe a slow algebraic decay, at odds with the exponential decay predicted by most previous studies. Visual observations reveal the dominant role of the vessel wall, which strongly influences the concentration field in the entire domain and causes the anomalous scaling. A simplified 1-D model supports our experimental results. Quantitative analysis of the concentration pattern leads to scalings for the distributions and the variance of the concentration field consistent with experimental and numerical results. I also discuss possible ways of avoiding the limiting role of walls.

This is joint work with Emmanuelle Gouillart, Olivier Dauchot, and Stephane Roux.

Mathematical modelling of blood flow in curved arteries
15:10 Fri 12 Sep, 2008 :: G03 Napier Building University of Adelaide :: Dr Jennifer Siggers :: Imperial College London

Atherosclerosis, characterised by plaques, is the most common arterial disease. Plaques tend to develop in regions of low mean wall shear stress, and regions where the wall shear stress changes direction during the course of the cardiac cycle. To investigate the effect of the arterial geometry and driving pressure gradient on the wall shear stress distribution we consider an idealised model of a curved artery with uniform curvature. We assume that the flow is fully-developed and seek solutions of the governing equations, finding the effect of the parameters on the flow and wall shear stress distribution. Most previous work assumes the curvature ratio is asymptotically small; however, many arteries have significant curvature (e.g. the aortic arch has curvature ratio approx 0.25), and in this work we consider in particular the effect of finite curvature.

We present an extensive analysis of curved-pipe flow driven by steady and unsteady pressure gradients. Increasing the curvature causes the shear stress on the inside of the bend to rise, indicating that the risk of plaque development would be overestimated by considering only the weak curvature limit.

Oceanographic Research at the South Australian Research and Development Institute: opportunities for collaborative research
15:10 Fri 21 Nov, 2008 :: Napier G04 :: Associate Prof John Middleton :: South Australian Research and Development Institute

Increasing threats to S.A.'s fisheries and marine environment have underlined the need for soundly based research into the ocean circulation and ecosystems (phyto/zooplankton) of the shelf and gulfs. With the support of Marine Innovation SA, the Oceanography Program has, within two years, grown to include 6 FTEs and a budget of over $4.8M. The program currently leads two major research projects, both of which involve numerical and applied mathematical modelling of oceanic flow and ecosystems as well as statistical techniques for the analysis of data. The first is the implementation of the Southern Australian Integrated Marine Observing System (SAIMOS), which is providing data to understand the dynamics of shelf boundary currents, monitor for climate change and understand the phyto/zooplankton ecosystems that underpin S.A.'s wild fisheries and aquaculture. SAIMOS involves the use of ship-based sampling, the deployment of underwater marine moorings, underwater gliders, HF Ocean RADAR, acoustic tracking of tagged fish and autonomous underwater vehicles.

The second major project involves measuring and modelling the ocean circulation and biological systems within Spencer Gulf and the impact on prawn larval dispersal and on the sustainability of existing and proposed aquaculture sites. The discussion will focus on opportunities for collaborative research with both faculty and students in this exciting growth area of S.A. science.

Bursts and canards in a pituitary lactotroph model
15:10 Fri 6 Mar, 2009 :: Napier LG29 :: Dr Martin Wechselberger :: University of Sydney

Bursting oscillations in nerve cells have been the focus of a great deal of attention by mathematicians. These are typically studied by taking advantage of multiple time-scales in the system under study to perform a singular perturbation analysis. Bursting also occurs in hormone-secreting pituitary cells, but is characterized by fast bursts with small electrical impulses. Although the separation of time-scales is not as clear, singular perturbation analysis is still the key to understand the bursting mechanism. In particular, we will show that canards are responsible for the observed oscillatory behaviour.
Geometric analysis on the noncommutative torus
13:10 Fri 20 Mar, 2009 :: School Board Room :: Prof Jonathan Rosenberg :: University of Maryland

Noncommutative geometry (in the sense of Alain Connes) involves replacing a conventional space by a "space" in which the algebra of functions is noncommutative. The simplest truly non-trivial noncommutative manifold is the noncommutative 2-torus, whose algebra of functions is also called the irrational rotation algebra. I will discuss a number of recent results on geometric analysis on the noncommutative torus, including the study of nonlinear noncommutative elliptic PDEs (such as the noncommutative harmonic map equation) and noncommutative complex analysis (with noncommutative elliptic functions).
Sloshing in tanks of liquefied natural gas (LNG) vessels
15:10 Wed 22 Apr, 2009 :: Napier LG29 :: Prof. Frederic Dias :: ENS, Cachan

The last scientific conversation I had with Ernie Tuck was on liquid impact. As a matter of fact, we discussed the paper by J.H. Milgram, Journal of Fluid Mechanics 37 (1969), entitled "The motion of a fluid in a cylindrical container with a free surface following vertical impact." Liquid impact is a key issue in sloshing and in particular in sloshing in tanks of LNG vessels. Numerical simulations of sloshing have been performed by various groups, using various types of numerical methods. In terms of the numerical results, the outcome is often impressive, but the question remains of how relevant these results are when it comes to determining impact pressures. The numerical models are too simplified to reproduce the high variability of the measured pressures. In fact, for the time being, it is not possible to simulate accurately both global and local effects. Unfortunately it appears that local effects predominate over global effects when the behaviour of pressures is considered. Having said this, it is important to point out that numerical studies can be quite useful to perform sensitivity analyses in idealized conditions such as a liquid mass falling under gravity on top of a horizontal wall and then spreading along the lateral sides. Simple analytical models inspired by numerical results on idealized problems can also be useful to predict trends. The talk is organized as follows: After a brief introduction on the sloshing problem and on scaling laws, it will be explained to what extent numerical studies can be used to improve our understanding of impact pressures. Results on a liquid mass hitting a wall obtained by a finite-volume code with interface reconstruction as well as results obtained by a simple analytical model will be shown to reproduce the trends of experiments on sloshing. This is joint work with L. Brosset (GazTransport & Technigaz), J.-M. Ghidaglia (ENS Cachan) and J.-P. Braeunig (INRIA).
Wall turbulence: from the laboratory to the atmosphere
15:00 Fri 29 May, 2009 :: Napier LG29 :: Prof Ivan Marusic :: The University of Melbourne

The study of wall-bounded turbulent flows has received great attention over the past few years as a result of high Reynolds number experiments conducted in new high Reynolds number facilities such as the Princeton "superpipe", the NDF facility in Chicago and the HRNBLWT at the University of Melbourne. These experiments have brought into question the fundamental scaling laws of the turbulence and mean flow quantities as well as revealed high Reynolds number phenomena, which make extrapolation of low Reynolds number results highly questionable. In this talk these issues will be reviewed and new results from the HRNBLWT and atmospheric surface layer on the salt-flats of Utah will be presented documenting unique high Reynolds number phenomena. The implications for skin-friction drag reduction technologies and improved near-wall models for large-eddy simulation will be discussed.
Strong Predictor-Corrector Euler Methods for Stochastic Differential Equations
15:10 Fri 19 Jun, 2009 :: LG29 :: Prof. Eckhard Platen :: University of Technology, Sydney

This paper introduces a new class of numerical schemes for the pathwise approximation of solutions of stochastic differential equations (SDEs). The proposed family of strong predictor-corrector Euler methods are designed to handle scenario simulation of solutions of SDEs. It has the potential to overcome some of the numerical instabilities that are often experienced when using the explicit Euler method. This is of importance, for instance, in finance where martingale dynamics arise for solutions of SDEs with multiplicative diffusion coefficients. Numerical experiments demonstrate the improved asymptotic stability properties of the proposed symmetric predictor-corrector Euler methods.
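As a rough illustration of the predictor-corrector idea (a generic sketch, not the specific family of schemes proposed in the paper; the function and parameter names here are illustrative), the following applies an explicit Euler predictor followed by a drift-averaging corrector to a geometric Brownian motion, the kind of multiplicative-diffusion SDE that arises in finance:

```python
import numpy as np

def predictor_corrector_euler(x0, mu, sigma, T, n, theta=0.5, seed=0):
    """Simulate one path of dX = mu(X) dt + sigma(X) dW with a
    predictor-corrector Euler scheme: an explicit Euler predictor,
    then a corrector that averages the drift with weight theta."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        x_pred = x[k] + mu(x[k]) * dt + sigma(x[k]) * dW        # predictor
        drift = theta * mu(x_pred) + (1.0 - theta) * mu(x[k])   # corrected drift
        x[k + 1] = x[k] + drift * dt + sigma(x[k]) * dW         # corrector
    return x

# Geometric Brownian motion: dX = 0.05 X dt + 0.2 X dW
path = predictor_corrector_euler(1.0, lambda x: 0.05 * x, lambda x: 0.2 * x, T=1.0, n=1000)
```

Setting theta=0 recovers the explicit Euler method; larger theta gives more weight to the drift evaluated at the predicted point, which is the source of the improved stability.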
Statistical analysis for harmonized development of systemic organs in human fetuses
11:00 Thu 17 Sep, 2009 :: School Board Room :: Prof Kanta Naito :: Shimane University

The growth processes of human babies have been studied extensively, but many aspects of the development of the human fetus remain unclarified. The aim of this research is to investigate the developing process of the systemic organs of human fetuses, based on a data set of measurements of fetal bodies and organs. Specifically, this talk is concerned with giving a mathematical understanding of the harmonized development of the organs of human fetuses. A method to evaluate such harmony is proposed, using the maximal dilatation appearing in the theory of quasi-conformal mappings.
The proof of the Poincare conjecture
15:10 Fri 25 Sep, 2009 :: Napier 102 :: Prof Terence Tao :: UCLA

In a series of three papers from 2002-2003, Grigori Perelman gave a spectacular proof of the Poincare Conjecture (every smooth compact simply connected three-dimensional manifold is homeomorphic to a sphere), one of the most famous open problems in mathematics (and one of the seven Clay Millennium Prize Problems worth a million dollars each), by developing several new groundbreaking advances in Hamilton's theory of Ricci flow on manifolds. In this talk I describe in broad terms how the proof proceeds, and briefly discuss some of the key turning points in the argument. About the speaker: Terence Tao was born in Adelaide, Australia, in 1975. He has been a professor of mathematics at UCLA since 1999, having completed his PhD under Elias Stein at Princeton in 1996. Tao's areas of research include harmonic analysis, PDE, combinatorics, and number theory. He has received a number of awards, including the Salem Prize in 2000, the Bochner Prize in 2002, the Fields Medal and SASTRA Ramanujan Prize in 2006, and the MacArthur Fellowship and Ostrowski Prize in 2007. Terence Tao also currently holds the James and Carol Collins chair in mathematics at UCLA, and is a Fellow of the Royal Society and the Australian Academy of Sciences (Corresponding Member).
Eigen-analysis of fluid-loaded compliant panels
15:10 Wed 9 Dec, 2009 :: Santos Lecture Theatre :: Prof Tony Lucey :: Curtin University of Technology

This presentation concerns the fluid-structure interaction (FSI) that occurs between a fluid flow and an arbitrarily deforming flexible boundary considered to be a flexible panel or a compliant coating that comprises the wetted surface of a marine vehicle. We develop and deploy an approach that is a hybrid of computational and theoretical techniques. The system studied is two-dimensional and linearised disturbances are assumed. Of particular novelty in the present work is the ability of our methods to extract a full set of fluid-structure eigenmodes for systems that have strong spatial inhomogeneity in the structure of the flexible wall.

We first present the approach and some results of the system in which an ideal, zero-pressure gradient, flow interacts with a flexible plate held at both its ends. We use a combination of boundary-element and finite-difference methods to express the FSI system as a single matrix equation in the interfacial variable. This is then couched in state-space form and standard methods used to extract the system eigenvalues. It is then shown how the incorporation of spatial inhomogeneity in the stiffness of the plate can be either stabilising or destabilising. We also show that adding a further restraint within the streamwise extent of a homogeneous panel can trigger an additional type of hydroelastic instability at low flow speeds. The mechanism for the fluid-to-structure energy transfer that underpins this instability can be explained in terms of the pressure-signal phase relative to that of the wall motion and the effect on this relationship of the added wall restraint.

We then show how the ideal-flow approach can be conceptually extended to include boundary-layer effects. The flow field is now modelled by the continuity equation and the linearised perturbation momentum equation written in velocity-velocity form. The near-wall flow field is spatially discretised into rectangular elements on an Eulerian grid and a variant of the discrete-vortex method is applied. The entire fluid-structure system can again be assembled as a linear system for a single set of unknowns - the flow-field vorticity and the wall displacements - that admits the extraction of eigenvalues. We then show how stability diagrams for the fully-coupled finite flow-structure system can be assembled, in doing so identifying classes of wall-based or fluid-based and spatio-temporal wave behaviour.

Hartogs-type holomorphic extensions
13:10 Tue 15 Dec, 2009 :: School Board Room :: Prof Roman Dwilewicz :: Missouri University of Science and Technology

We will review holomorphic extension problems starting with the famous Hartogs extension theorem (1906), via Severi-Kneser-Fichera-Martinelli theorems, up to some recent (partial) results of Al Boggess (Texas A&M Univ.), Zbigniew Slodkowski (Univ. Illinois at Chicago), and the speaker. The holomorphic extension problems for holomorphic or Cauchy-Riemann functions are fundamental problems in complex analysis of several variables. The talk will be very elementary, with many figures, and accessible to graduate and even advanced undergraduate students.
A solution to the Gromov-Vaserstein problem
15:10 Fri 29 Jan, 2010 :: Engineering North N 158 Chapman Lecture Theatre :: Prof Frank Kutzschebauch :: University of Berne, Switzerland

Any matrix in $SL_n (\mathbb C)$ can be written as a product of elementary matrices using the Gauss elimination process. If instead of the field of complex numbers, the entries in the matrix are elements of a more general ring, this becomes a delicate question. In particular, rings of complex-valued functions on a space are interesting cases. A deep result of Suslin gives an affirmative answer for the polynomial ring in $m$ variables in case the size $n$ of the matrix is at least 3. In the topological category, the problem was solved by Thurston and Vaserstein. For holomorphic functions on $\mathbb C^m$, the problem was posed by Gromov in the 1980s. We report on a complete solution to Gromov's problem. A main tool is the Oka-Grauert-Gromov h-principle in complex analysis. Our main theorem can be formulated as follows: In the absence of obvious topological obstructions, the Gauss elimination process can be performed in a way that depends holomorphically on the matrix. This is joint work with Bj\"orn Ivarsson.
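For intuition, the underlying algebraic fact is easy to verify in the 2x2 case over the complex numbers: any matrix of determinant 1 with nonzero lower-left entry factors into three elementary (unipotent) matrices. The sketch below is only this classical pointwise computation, not the holomorphic, parameter-dependent construction of the talk:

```python
import numpy as np

def sl2_as_unipotents(M, tol=1e-12):
    """Write a 2x2 matrix of determinant 1 (with nonzero lower-left
    entry c) as a product of three elementary unipotent matrices:
    M = U((a-1)/c) L(c) U((d-1)/c)."""
    (a, b), (c, d) = M
    assert abs(a * d - b * c - 1) < tol and abs(c) > tol
    U = lambda t: np.array([[1, t], [0, 1]], dtype=complex)  # upper unipotent
    L = lambda t: np.array([[1, 0], [t, 1]], dtype=complex)  # lower unipotent
    return U((a - 1) / c), L(c), U((d - 1) / c)

M = np.array([[2, 3], [1, 2]], dtype=complex)  # det = 1
E1, E2, E3 = sl2_as_unipotents(M)
```

The identity is checked by multiplying out: the top-right entry of the product is (ad - 1)/c, which equals b precisely because ad - bc = 1. The difficulty addressed in the talk is making such factorisations depend holomorphically on a family of matrices.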
The fluid mechanics of gels used in tissue engineering
15:10 Fri 9 Apr, 2010 :: Santos Lecture Theatre :: Dr Edward Green :: University of Western Australia

Tissue engineering could be called 'the science of spare parts'. Although currently in its infancy, its long-term aim is to grow functional tissues and organs in vitro to replace those which have become defective through age, trauma or disease. Recent experiments have shown that mechanical interactions between cells and the materials in which they are grown have an important influence on tissue architecture, but in order to understand these effects, we first need to understand the mechanics of the gels themselves.

Many biological gels (e.g. collagen) used in tissue engineering have a fibrous microstructure which affects the way forces are transmitted through the material, and which in turn affects cell migration and other behaviours. I will present a simple continuum model of gel mechanics, based on treating the gel as a transversely isotropic viscous material. Two canonical problems are considered involving thin two-dimensional films: extensional flow, and squeezing flow of the fluid between two rigid plates. Neglecting inertia, gravity and surface tension, in each regime we can exploit the thin geometry to obtain a leading-order problem which is sufficiently tractable to allow the use of analytical methods. I discuss how these results could be exploited practically to determine the mechanical properties of real gels. If time permits, I will also talk about work currently in progress which explores the interaction between gel mechanics and cell behaviour.

Estimation of sparse Bayesian networks using a score-based approach
15:10 Fri 30 Apr, 2010 :: School Board Room :: Dr Jessica Kasza :: University of Copenhagen

The estimation of Bayesian networks given high-dimensional data sets, with more variables than there are observations, has been the focus of much recent research. These structures provide a flexible framework for the representation of the conditional independence relationships of a set of variables, and can be particularly useful in the estimation of genetic regulatory networks given gene expression data.

In this talk, I will discuss some new research on learning sparse networks, that is, networks with many conditional independence restrictions, using a score-based approach. In the case of genetic regulatory networks, such sparsity reflects the view that each gene is regulated by relatively few other genes. The presented approach allows prior information about the overall sparsity of the underlying structure to be included in the analysis, as well as the incorporation of prior knowledge about the connectivity of individual nodes within the network.
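As a hedged illustration of what a score-based approach can look like (a generic BIC-style local score for Gaussian data; the talk's actual score and sparsity priors are not specified here, and the function names are illustrative), a complexity penalty makes adding a parent worthwhile only when it substantially improves the fit:

```python
import numpy as np

def local_score(X, j, parents):
    """BIC-style local score of node j given a candidate parent set:
    Gaussian log-likelihood of regressing X[:, j] on its parents,
    minus a penalty that grows with the number of parents."""
    n = X.shape[0]
    y = X[:, j]
    A = np.column_stack([X[:, list(parents)], np.ones(n)]) if parents else np.ones((n, 1))
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = float(np.sum((y - A @ beta) ** 2))
    return -0.5 * n * np.log(rss / n) - 0.5 * np.log(n) * (len(parents) + 1)

rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
x1 = 2.0 * x0 + 0.1 * rng.normal(size=500)  # node 1 strongly regulated by node 0
X = np.column_stack([x0, x1])
```

A search over parent sets (greedy or otherwise) would then pick, for each node, the set maximising this score; the penalty term is what encodes the sparsity assumption that each gene is regulated by relatively few others.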

Whole genome analysis of repetitive DNA
15:10 Fri 21 May, 2010 :: Napier 209 :: Prof David Adelson :: University of Adelaide

The interspersed repeat content of mammalian genomes has been best characterized in human, mouse and cow. We carried out de novo identification of repeated elements in the equine genome and identified previously unknown elements present at low copy number. The equine genome contains typical eutherian mammal repeats. We analysed both interspersed and simple sequence repeats (SSR) genome-wide, finding that some repeat classes are spatially correlated with each other as well as with G+C content and gene density. Based on these spatial correlations, we have confirmed recently-described ancestral vs clade-specific genome territories defined by repeat content. Territories enriched for ancestral repeats tended to be contiguous domains. To determine if these territories were evolutionarily conserved, we compared these results with a similar analysis of the human genome, and observed similar ancestral repeat enriched domains. These results indicate that ancestral, evolutionarily conserved mammalian genome territories can be identified on the basis of repeat content alone. Interspersed repeats of different ages appear to be analogous to geologic strata, allowing identification of ancient vs newly remodelled regions of mammalian genomes.
Interpolation of complex data using spatio-temporal compressive sensing
13:00 Fri 28 May, 2010 :: Santos Lecture Theatre :: A/Prof Matthew Roughan :: School of Mathematical Sciences, University of Adelaide

Many complex datasets suffer from missing data, and interpolating these missing elements is a key task in data analysis. Moreover, it is often the case that we see only a linear combination of the desired measurements, not the measurements themselves. For instance, in network management, it is easy to count the traffic on a link, but harder to measure the end-to-end flows. Additionally, typical interpolation algorithms treat either the spatial or the temporal components of the data separately, but many real datasets have strong spatio-temporal structure that we would like to exploit in reconstructing the missing data. In this talk I will describe a novel reconstruction algorithm that exploits concepts from the growing area of compressive sensing to solve all of these problems and more. The approach works so well on Internet traffic matrices that we can obtain a reasonable reconstruction with as much as 98% of the original data missing.
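To make the flavour of such reconstructions concrete, here is a toy low-rank matrix-completion iteration (alternating an SVD truncation with re-imposing the observed entries). This is a generic sketch under an assumed low-rank structure, not the spatio-temporal compressive-sensing algorithm of the talk:

```python
import numpy as np

def complete_matrix(X, mask, rank, iters=500):
    """Fill missing entries of X (where mask is False) by alternating
    a best rank-r approximation with re-imposing observed entries."""
    Y = np.where(mask, X, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        Y = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # best rank-r approximation
        Y[mask] = X[mask]                         # keep observed entries fixed
    return Y

true = np.outer(np.arange(1.0, 9.0), np.arange(1.0, 9.0))  # a rank-1 "traffic matrix"
mask = np.ones((8, 8), dtype=bool)
mask[::3, ::2] = False                                     # hide 12 of the 64 entries
filled = complete_matrix(true, mask, rank=1)
```

Because the true matrix here is exactly rank 1 and every row and column retains observed entries, the iteration recovers the hidden values; real traffic matrices are only approximately low-rank, which is where the additional spatio-temporal structure helps.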
The mathematics of theoretical inference in cognitive psychology
15:10 Fri 11 Jun, 2010 :: Napier LG24 :: Prof John Dunn :: University of Adelaide

The aim of psychology in general, and of cognitive psychology in particular, is to construct theoretical accounts of mental processes based on observed changes in performance on one or more cognitive tasks. The fundamental problem faced by the researcher is that these mental processes are not directly observable but must be inferred from changes in performance between different experimental conditions. This inference is further complicated by the fact that performance measures may only be monotonically related to the underlying psychological constructs. State-trace analysis provides an approach to this problem which has gained increasing interest in recent years. In this talk, I explain state-trace analysis and discuss the set of mathematical issues that flow from it. Principal among these are the challenges of statistical inference and an unexpected connection to the mathematics of oriented matroids.
Some thoughts on wine production
15:05 Fri 18 Jun, 2010 :: School Board Room :: Prof Zbigniew Michalewicz :: School of Computer Science, University of Adelaide

In the modern information era, managers (e.g. winemakers) recognize the competitive opportunities represented by decision-support tools which can provide significant cost savings and revenue increases for their businesses. Wineries make daily decisions on the processing of grapes, from harvest time (prediction of maturity of grapes, scheduling of equipment and labour, capacity planning, scheduling of crushers) through tank farm activities (planning and scheduling of wine and juice transfers on the tank farm) to packaging processes (bottling and storage activities). As such operations are quite complex, the whole area is rich in interesting OR-related issues. These include the issues of global vs. local optimization, the relationship between prediction and optimization, operating in dynamic environments, strategic vs. tactical optimization, and multi-objective optimization & trade-off analysis. During the talk we address the above issues; a few real-world applications will be shown and discussed to emphasize some of the presented material.
Compound and constrained regression analyses for EIV models
15:05 Fri 27 Aug, 2010 :: Napier LG28 :: Prof Wei Zhu :: State University of New York at Stony Brook

In linear regression analysis, randomness often exists in the independent variables and the resulting models are referred to as errors-in-variables (EIV) models. The existing general EIV modeling framework, the structural model approach, is parametric and dependent on the usually unknown underlying distributions. In this work, we introduce a general non-parametric EIV modeling framework, the compound regression analysis, featuring an intuitive geometric representation and a 1-1 correspondence to the structural model. Properties, examples and further generalizations of this new modeling approach are discussed in this talk.
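For context, the simplest treatment of errors in both variables is orthogonal (total least squares) regression, which fits the line minimising perpendicular rather than vertical distances. The sketch below illustrates the EIV setting only; it is not the compound regression method of the talk:

```python
import numpy as np

def orthogonal_regression(x, y):
    """Fit the line y = a + b*x by total least squares: the smallest
    principal axis of the centred data gives the line's normal vector,
    treating x and y symmetrically (both measured with error)."""
    Z = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    nx, ny = Vt[-1]                 # normal to the best-fitting line
    b = -nx / ny
    a = y.mean() - b * x.mean()
    return a, b

x = np.linspace(0.0, 10.0, 50)
a, b = orthogonal_regression(x, 1.0 + 2.0 * x)  # noiseless line y = 1 + 2x
```

Unlike ordinary least squares, this estimator is invariant under swapping the roles of x and y, which is the geometric intuition behind non-parametric EIV approaches.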
The mathematics of smell
15:10 Wed 29 Sep, 2010 :: Ingkarni Wardli 5.57 :: Dr Michael Borgas :: CSIRO Light Metals Flagship; Marine and Atmospheric Research; Centre for Australian Weather and Clim

The sense of smell is important in nature, but the least well understood of our senses. A mathematical model of smell, which combines the transmission of volatile-organic-compound chemical signals (VOCs) on the wind, transduced by olfactory receptors in our noses into neural information, and assembled into our odour perception, is useful. Applications include regulations for odour nuisance, like German VDI protocols for calibrated noses, to the design of modern chemical sensors for extracting information from the environment and even for the perfume industry. This talk gives a broad overview of turbulent mixing in surface layers of the atmosphere, measurements of VOCs with PTR-MS (proton transfer reaction mass spectrometers), our noses, and integrated environmental models of the Alumina industry (a source of odour emissions) to help understand the science of smell.
Principal Component Analysis Revisited
15:10 Fri 15 Oct, 2010 :: Napier G04 :: Assoc. Prof Inge Koch :: University of Adelaide

Since the beginning of the 20th century, Principal Component Analysis (PCA) has been an important tool in the analysis of multivariate data. The principal components summarise data in fewer than the original number of variables without losing essential information, and thus allow a split of the data into signal and noise components. PCA is a linear method, based on elegant mathematical theory. The increasing complexity of data together with the emergence of fast computers in the later parts of the 20th century has led to a renaissance of PCA. The growing numbers of variables (in particular, high-dimensional low sample size problems), non-Gaussian data, and functional data (where the data are curves) are posing exciting challenges to statisticians, and have resulted in new research which extends the classical theory. I begin with the classical PCA methodology and illustrate the challenges presented by the complex data that we are now able to collect. The main part of the talk focuses on extensions of PCA: the duality of PCA and the Principal Coordinates of Multidimensional Scaling, Sparse PCA, and consistency results relating to principal components, as the dimension grows. We will also look at newer developments such as Principal Component Regression and Supervised PCA, nonlinear PCA and Functional PCA.
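As a reminder of the classical method the talk starts from, PCA can be computed in a few lines from the SVD of the centred data matrix (a standard textbook computation; the variable names are illustrative):

```python
import numpy as np

def pca(X, k):
    """Classical PCA via the SVD of the centred data matrix: returns
    the first k principal component scores, the loadings (directions
    of maximal variance), and the proportion of variance explained."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :k] * s[:k]
    loadings = Vt[:k].T
    explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    return scores, loadings, explained

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
scores, loadings, explained = pca(X, 2)
```

The extensions discussed in the talk (sparse PCA, functional PCA, high-dimension low sample size asymptotics) all modify or reinterpret pieces of this basic decomposition.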
Slippery issues in nano- and microscale fluid flows
11:10 Tue 30 Nov, 2010 :: Innova teaching suite B21 :: Dr Shaun C. Hendy :: Victoria University of Wellington

The no-slip boundary condition was considered to have been experimentally established for the flow of simple liquids over solid surfaces in the early 20th century. Nonetheless the refinement of a number of measurement techniques has recently led to the observation of nano- and microscale violations of the no-slip boundary condition by simple fluids flowing over non-wetting surfaces. However it is important to distinguish between intrinsic slip, which arises solely from the chemical interaction between the liquid and a homogeneous, atomically flat surface and effective slip, typically measured in macroscopic experiments, which emerges from the interaction of microscopic chemical heterogeneity, roughness and contaminants. Here we consider the role of both intrinsic and effective slip boundary conditions in nanoscale and microscale fluid flows using a theoretical approach, complemented by molecular dynamics simulations, and experimental evidence where available. Firstly, we consider nanoscale flows in small capillaries, including carbon nanotubes, where we have developed and solved a generalised Lucas-Washburn equation that incorporates slip to describe the uptake of droplets. We then consider the general problem of relating effective slip to microscopic intrinsic slip and roughness, and discuss several cases where we have been able to solve this problem analytically. Finally, we look at applications of these results to carbon nanotube growth, self-cleaning surfaces, catalysis, and putting insulation in your roof.
Bioinspired computation in combinatorial optimization: algorithms and their computational complexity
15:10 Fri 11 Mar, 2011 :: 7.15 Ingkarni Wardli :: Dr Frank Neumann :: The University of Adelaide

Media...
Bioinspired computation methods, such as evolutionary algorithms and ant colony optimization, are being applied successfully to complex engineering and combinatorial optimization problems. The computational complexity analysis of this type of algorithms has significantly increased the theoretical understanding of these successful algorithms. In this talk, I will give an introduction into this field of research and present some important results that we achieved for problems from combinatorial optimization. These results can also be found in my recent textbook "Bioinspired Computation in Combinatorial Optimization -- Algorithms and Their Computational Complexity".
Classification for high-dimensional data
15:10 Fri 1 Apr, 2011 :: Conference Room Level 7 Ingkarni Wardli :: Associate Prof Inge Koch :: The University of Adelaide

For two-class classification problems Fisher's discriminant rule performs well in many scenarios provided the dimension, d, is much smaller than the sample size n. As the dimension increases, Fisher's rule may no longer be adequate, and can perform as poorly as random guessing. In this talk we look at new ways of overcoming this poor performance for high-dimensional data by suitably modifying Fisher's rule, and in particular we describe the 'Features Annealed Independence Rule' (FAIR) of Fan and Fan (2008) and a rule based on canonical correlation analysis. I describe some theoretical developments, and also show analyses of data which illustrate the performance of these modified rules.
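For reference, the classical Fisher rule for two classes is straightforward when d is much smaller than n, the regime in which it performs well. This sketch implements only the classical rule, not the FAIR or canonical-correlation modifications discussed in the talk:

```python
import numpy as np

def fisher_rule(X0, X1):
    """Fisher's two-class discriminant: w = S^{-1}(mu1 - mu0) with S
    the pooled covariance; classify x to class 1 if w.(x - m) > 0,
    where m is the midpoint of the two class means."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    S = (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)) / 2.0
    w = np.linalg.solve(S, mu1 - mu0)
    m = (mu0 + mu1) / 2.0
    return lambda x: int(w @ (x - m) > 0)

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(100, 3))  # class 0 training sample
X1 = rng.normal(3.0, 1.0, size=(100, 3))  # class 1 training sample
rule = fisher_rule(X0, X1)
```

The trouble in high dimensions is visible in the line `np.linalg.solve(S, ...)`: when d approaches or exceeds n, the pooled covariance S becomes ill-conditioned or singular, which is what the modified rules are designed to avoid.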
A strong Oka principle for embeddings of some planar domains into CxC*, I
13:10 Fri 6 May, 2011 :: Mawson 208 :: Mr Tyson Ritter :: University of Adelaide

The Oka principle refers to a collection of results in complex analysis which state that there are only topological obstructions to solving certain holomorphically defined problems involving Stein manifolds. For example, a basic version of Gromov's Oka principle states that every continuous map from a Stein manifold into an elliptic complex manifold is homotopic to a holomorphic map. In these two talks I will discuss a new result showing that if we restrict the class of source manifolds to circular domains and fix the target as CxC* we can obtain a much stronger Oka principle: every continuous map from a circular domain S into CxC* is homotopic to a proper holomorphic embedding. This result has close links with the long-standing and difficult problem of finding proper holomorphic embeddings of Riemann surfaces into C^2, with additional motivation from other sources.
When statistics meets bioinformatics
12:10 Wed 11 May, 2011 :: Napier 210 :: Prof Patty Solomon :: School of Mathematical Sciences

Media...
Bioinformatics is a new field of research which encompasses mathematics, computer science, biology, medicine and the physical sciences. It has arisen from the need to handle and analyse the vast amounts of data being generated by the new genomics technologies. The interface of these disciplines used to be information-poor, but is now information-mega-rich, and statistics plays a central role in processing this information and making it intelligible. In this talk, I will describe a published bioinformatics study which claimed to have developed a simple test for the early detection of ovarian cancer from a blood sample. The US Food and Drug Administration was on the verge of approving the test kits for market in 2004 when demonstrated flaws in the study design and analysis led to its withdrawal. We are still waiting for an effective early biomarker test for ovarian cancer.
A strong Oka principle for embeddings of some planar domains into CxC*, II
13:10 Fri 13 May, 2011 :: Mawson 208 :: Mr Tyson Ritter :: University of Adelaide

The Oka principle refers to a collection of results in complex analysis which state that there are only topological obstructions to solving certain holomorphically defined problems involving Stein manifolds. For example, a basic version of Gromov's Oka principle states that every continuous map from a Stein manifold into an elliptic complex manifold is homotopic to a holomorphic map. In these two talks I will discuss a new result showing that if we restrict the class of source manifolds to circular domains and fix the target as CxC* we can obtain a much stronger Oka principle: every continuous map from a circular domain S into CxC* is homotopic to a proper holomorphic embedding. This result has close links with the long-standing and difficult problem of finding proper holomorphic embeddings of Riemann surfaces into C^2, with additional motivation from other sources.
Change detection in rainfall time series for Perth, Western Australia
12:10 Mon 16 May, 2011 :: 5.57 Ingkarni Wardli :: Farah Mohd Isa :: University of Adelaide

There have been numerous reports that the rainfall in the south of Western Australia, particularly around Perth, has undergone a step-change decrease, which is typically attributed to climate change. Four statistical tests are used to assess the empirical evidence for this claim on time series, each exceeding 50 years in length, from five meteorological stations. The tests used in this study are the CUSUM, Bayesian change point analysis, the consecutive t-test and Hotelling's T²-statistic. Results from the multivariate Hotelling's T² analysis are compared with those from the three univariate analyses. The issue of multiple comparisons is discussed. A summary of the empirical evidence for the claimed step change in the Perth area is given.
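Of the four tests, the CUSUM is the simplest to sketch: it locates a single change in mean where the cumulative sum of deviations from the overall mean is largest in magnitude. A minimal illustration, not the exact formulation used in the study:

```python
import numpy as np

def cusum_change_point(x):
    """Single change-in-mean detection via the CUSUM statistic
    S_k = sum_{i<=k} (x_i - xbar): the change is located where |S_k|
    is maximal; max|S_k| can be compared against a reference value."""
    s = np.cumsum(x - x.mean())
    k = int(np.argmax(np.abs(s[:-1])))  # a change cannot sit at the last point
    return k + 1, float(np.abs(s[k]))   # first index of the new regime, statistic

x = np.r_[np.full(50, 10.0), np.full(50, 8.0)]  # step decrease after index 50
cp, stat = cusum_change_point(x)
```

In practice the statistic would be compared to a critical value obtained from its null distribution (e.g. via permutation), which is where the multiple-comparisons issue mentioned above enters when several stations are tested.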
Permeability of heterogeneous porous media - experiments, mathematics and computations
15:10 Fri 27 May, 2011 :: B.21 Ingkarni Wardli :: Prof Patrick Selvadurai :: Department of Civil Engineering and Applied Mechanics, McGill University

Permeability is a key parameter important to a variety of applications in geological engineering and in the environmental geosciences. The conventional definition of Darcy flow enables the estimation of permeability at different levels of detail. This lecture will focus on the measurement of surface permeability characteristics of a large cuboidal block of Indiana Limestone, using a surface permeameter. The paper discusses the theoretical developments, the solution of the resulting triple integral equations and associated computational treatments that enable the mapping of the near surface permeability of the cuboidal region. This data combined with a kriging procedure is used to develop results for the permeability distribution at the interior of the cuboidal region. Upon verification of the absence of dominant pathways for fluid flow through the cuboidal region, estimates are obtained for the "Effective Permeability" of the cuboid using estimates proposed by Wiener, Landau and Lifschitz, King, Matheron, Journel et al., Dagan and others. The results of these estimates are compared with the geometric mean, derived from the computational estimates.
Optimal experimental design for stochastic population models
15:00 Wed 1 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Dan Pagendam :: CSIRO, Brisbane

Markov population processes are popular models for studying a wide range of phenomena including the spread of disease, the evolution of chemical reactions and the movements of organisms in population networks (metapopulations). Our ability to use these models effectively can be limited by our knowledge about parameters, such as disease transmission and recovery rates in an epidemic. Recently, there has been interest in devising optimal experimental designs for stochastic models, so that practitioners can collect data in a manner that maximises the precision of maximum likelihood estimates of the parameters for these models. I will discuss some recent work on optimal design for a variety of population models, beginning with some simple one-parameter models where the optimal design can be obtained analytically and moving on to more complicated multi-parameter models in epidemiology that involve latent states and non-exponentially distributed infectious periods. For these more complex models, the optimal design must be arrived at using computational methods and we rely on a Gaussian diffusion approximation to obtain analytical expressions for Fisher's information matrix, which is at the heart of most optimality criteria in experimental design. I will outline a simple cross-entropy algorithm that can be used for obtaining optimal designs for these models. We will also explore the improvements in experimental efficiency when using the optimal design over some simpler designs, such as the design where observations are spaced equidistantly in time.
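The role of Fisher's information in design can be seen in a one-parameter toy model: for observations y_i ~ N(exp(-theta*t_i), sigma^2), the information about theta contributed by an observation at time t is (t*exp(-theta*t))^2 / sigma^2, so a D-optimal single observation time can be found by maximising it (analytically, t* = 1/theta). This is a generic illustration with made-up names, not one of the epidemic models from the talk:

```python
import numpy as np

def fisher_info(theta, times, sigma=0.1):
    """Fisher information for theta when y_i ~ N(exp(-theta t_i), sigma^2):
    I(theta) = sum_i (dmu/dtheta at t_i)^2 / sigma^2, mu(t) = exp(-theta t)."""
    t = np.asarray(times, dtype=float)
    return float(np.sum((t * np.exp(-theta * t)) ** 2) / sigma ** 2)

# D-optimal single observation time by grid search; analytically t* = 1/theta
theta = 2.0
grid = np.linspace(0.01, 3.0, 3000)
best_t = grid[np.argmax([fisher_info(theta, [t]) for t in grid])]
```

For multi-parameter models I(theta) becomes a matrix and D-optimality maximises its determinant; the diffusion approximation mentioned above is what makes that matrix analytically tractable for stochastic population models.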
Inference and optimal design for percolation and general random graph models (Part I)
09:30 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge

The problem of optimal arrangement of nodes of a random weighted graph is discussed in this workshop. The nodes of graphs under study are fixed, but their edges are random and established according to the so called edge-probability function. This function is assumed to depend on the weights attributed to the pairs of graph nodes (or distances between them) and a statistical parameter. It is the purpose of experimentation to make inference on the statistical parameter and thus to extract as much information about it as possible. We also distinguish between two different experimentation scenarios: progressive and instructive designs.

We adopt a utility-based Bayesian framework to tackle the optimal design problem for random graphs of this kind. Simulation-based optimisation methods, mainly Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We study the optimal design problem for inference based on partial observations of random graphs by employing a data augmentation technique. We prove that infinitely growing or diminishing node configurations asymptotically represent the worst node arrangements. We also obtain the exact solution to the optimal design problem for proximity (geometric) graphs and a numerical solution for graphs with threshold edge-probability functions.

We consider inference and optimal design problems for finite clusters from bond percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both numerical and analytical results for these graphs. We introduce inner-outer plots by deleting some of the lattice nodes and show that the 'mostly populated' designs are not necessarily optimal in the case of incomplete observations under both progressive and instructive design scenarios. Some of the obtained results may generalise to other lattices.

Inference and optimal design for percolation and general random graph models (Part II)
10:50 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge

The problem of optimal arrangement of nodes of a random weighted graph is discussed in this workshop. The nodes of graphs under study are fixed, but their edges are random and established according to the so called edge-probability function. This function is assumed to depend on the weights attributed to the pairs of graph nodes (or distances between them) and a statistical parameter. It is the purpose of experimentation to make inference on the statistical parameter and thus to extract as much information about it as possible. We also distinguish between two different experimentation scenarios: progressive and instructive designs.

We adopt a utility-based Bayesian framework to tackle the optimal design problem for random graphs of this kind. Simulation-based optimisation methods, mainly Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We study the optimal design problem for inference based on partial observations of random graphs by employing a data augmentation technique. We prove that infinitely growing or diminishing node configurations asymptotically represent the worst node arrangements. We also obtain the exact solution to the optimal design problem for proximity (geometric) graphs and a numerical solution for graphs with threshold edge-probability functions.

We consider inference and optimal design problems for finite clusters from bond percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both numerical and analytical results for these graphs. We introduce inner-outer plots by deleting some of the lattice nodes and show that the 'mostly populated' designs are not necessarily optimal in the case of incomplete observations under both progressive and instructive design scenarios. Some of the obtained results may generalise to other lattices.

Quantitative proteomics: data analysis and statistical challenges
10:10 Thu 30 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Peter Hoffmann :: Adelaide Proteomics Centre

Introduction to functional data analysis with applications to proteomics data
11:10 Thu 30 Jun, 2011 :: 7.15 Ingkarni Wardli :: A/Prof Inge Koch :: School of Mathematical Sciences

Object oriented data analysis
14:10 Thu 30 Jun, 2011 :: 7.15 Ingkarni Wardli :: Prof Steve Marron :: The University of North Carolina at Chapel Hill

Object Oriented Data Analysis is the statistical analysis of populations of complex objects. In the special case of Functional Data Analysis, these data objects are curves, where standard Euclidean approaches, such as principal components analysis, have been very successful. Recent developments in medical image analysis motivate the statistical analysis of populations of more complex data objects which are elements of mildly non-Euclidean spaces, such as Lie Groups and Symmetric Spaces, or of strongly non-Euclidean spaces, such as spaces of tree-structured data objects. These new contexts for Object Oriented Data Analysis create several potentially large new interfaces between mathematics and statistics. Even in situations where Euclidean analysis makes sense, there are statistical challenges because of the High Dimension Low Sample Size problem, which motivates a new type of asymptotics leading to non-standard mathematical statistics.
Object oriented data analysis of tree-structured data objects
15:10 Fri 1 Jul, 2011 :: 7.15 Ingkarni Wardli :: Prof Steve Marron :: The University of North Carolina at Chapel Hill

The field of Object Oriented Data Analysis has made a lot of progress on the statistical analysis of the variation in populations of complex objects. A particularly challenging example of this type is populations of tree-structured objects. Deep challenges arise, which involve a marriage of ideas from statistics, geometry, and numerical analysis, because the space of trees is strongly non-Euclidean in nature. These challenges, together with three completely different approaches to addressing them, are illustrated using a real data example, where each data point is the tree of blood arteries in one person's brain.
Dealing with the GC-content bias in second-generation DNA sequence data
15:10 Fri 12 Aug, 2011 :: Horace Lamb :: Prof Terry Speed :: Walter and Eliza Hall Institute

Media...
The field of genomics is currently dealing with an explosion of data from so-called second-generation DNA sequencing machines. This is creating many challenges and opportunities for statisticians interested in the area. In this talk I will outline the technology and the data flood, and move on to one particular problem where the technology is used: copy-number analysis. There we find a novel bias, which, if not dealt with properly, can dominate the signal of interest. I will describe how we think about and summarize it, and go on to identify a plausible source of this bias, leading up to a way of removing it. Our approach makes use of the total variation metric on discrete measures, but apart from this, is largely descriptive.
Alignment of time course gene expression data sets using Hidden Markov Models
12:10 Mon 5 Sep, 2011 :: 5.57 Ingkarni Wardli :: Mr Sean Robinson :: University of Adelaide

Time course microarray experiments allow for insight into biological processes by measuring gene expression over a time period of interest. This project is concerned with time course data from a microarray experiment conducted on a particular variety of grapevine over the development of the grape berries at a number of different vineyards in South Australia. The aim of the project is to construct a methodology for combining the data from the different vineyards in order to obtain more precise estimates of the underlying behaviour of the genes over the development process. A major issue in doing so is that the rate of development of the grape berries differs between vineyards. Hidden Markov models (HMMs) are a well-established methodology for modelling time series data in a number of domains and have been previously used for gene expression analysis. Modelling the grapevine data presents a unique modelling issue, namely the alignment of the expression profiles needed to combine the data from different vineyards. In this seminar, I will describe our problem, review HMMs, present an extension to HMMs and show some preliminary results modelling the grapevine data.
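As a minimal illustration of the HMM machinery reviewed in the talk, the forward algorithm evaluates the likelihood of an observation sequence under a two-state model. All numbers below are made up for illustration; they are not taken from the grapevine model.

```python
# Minimal HMM forward algorithm: likelihood of an observation sequence
# under a two-state model with two observation symbols (illustrative numbers).

def forward_likelihood(pi, A, B, obs):
    """pi: initial state probs, A: transition matrix, B: emission matrix."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(len(alpha))) * B[j][o]
                 for j in range(len(alpha))]
    return sum(alpha)

pi = [0.6, 0.4]                  # initial distribution over hidden states
A = [[0.7, 0.3], [0.4, 0.6]]     # state transition probabilities
B = [[0.9, 0.1], [0.2, 0.8]]     # emission probabilities (2 symbols)
likelihood = forward_likelihood(pi, A, B, [0, 1, 0])
```

The same recursion, run per vineyard with a shared emission model, is the kind of building block an alignment extension would modify.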
Statistical analysis of metagenomic data from the microbial community involved in industrial bioleaching
12:10 Mon 19 Sep, 2011 :: 5.57 Ingkarni Wardli :: Ms Susana Soto-Rojo :: University of Adelaide

In the last two decades heap bioleaching has become established as a successful commercial option for recovering copper from low-grade secondary sulfide ores. Genetics-based approaches have recently been employed in the task of characterizing mineral processing bacteria. Data analysis is a key issue, and the implementation of adequate mathematical and statistical tools is therefore of fundamental importance for drawing reliable conclusions. In this talk I will give an account of two specific problems that we have been working on: the first concerns experimental design, and the second the modeling of the composition and activity of the microbial consortium.
Estimating disease prevalence in hidden populations
14:05 Wed 28 Sep, 2011 :: B.18 Ingkarni Wardli :: Dr Amber Tomas :: The University of Oxford

Estimating disease prevalence in "hidden" populations such as injecting drug users or men who have sex with men is an important public health issue. However, traditional design-based estimation methods are inappropriate because they assume that a list of all members of the population is available from which to select a sample. Respondent Driven Sampling (RDS) is a method developed over the last 15 years for sampling from hidden populations. Similarly to snowball sampling, it leverages the fact that members of hidden populations are often socially connected to one another. Although RDS is now used around the world, there are several common population characteristics which are known to cause estimates calculated from such samples to be significantly biased. In this talk I'll discuss the motivation for RDS, as well as some of the recent developments in methods of estimation.
Statistical analysis of school-based student performance data
12:10 Mon 10 Oct, 2011 :: 5.57 Ingkarni Wardli :: Ms Jessica Tan :: University of Adelaide

Join me in the journey of being a statistician for 15 minutes of your day (if you are not already one) and experience the task of data cleaning without having to get your own hands dirty. Most of you may have sat the Basic Skills Tests when at school or know someone who currently has to do the NAPLAN (National Assessment Program - Literacy and Numeracy) tests. Tests like these assess student progress and can be used to accurately measure school performance. In trying to answer the research question: "what conclusions about student progress and school performance can be drawn from NAPLAN data or data of a similar nature, using mathematical and statistical modelling and analysis techniques?", I have uncovered some interesting results about the data in my initial data analysis which I shall explain in this talk.
Statistical modelling for some problems in bioinformatics
11:10 Fri 14 Oct, 2011 :: B.17 Ingkarni Wardli :: Professor Geoff McLachlan :: The University of Queensland

Media...
In this talk we consider some statistical analyses of data arising in bioinformatics. The problems include the detection of differential expression in microarray gene-expression data, the clustering of time-course gene-expression data and, lastly, the analysis of modern-day cytometric data. Extensions are considered to the procedures proposed for these three problems in McLachlan et al. (Bioinformatics, 2006), Ng et al. (Bioinformatics, 2006), and Pyne et al. (PNAS, 2009), respectively. The latter references are available at http://www.maths.uq.edu.au/~gjm/.
On the role of mixture distributions in the modelling of heterogeneous data
15:10 Fri 14 Oct, 2011 :: 7.15 Ingkarni Wardli :: Prof Geoff McLachlan :: University of Queensland

Media...
We consider the role that finite mixture distributions have played in the modelling of heterogeneous data, in particular for clustering continuous data via mixtures of normal distributions. A very brief history is given starting with the seminal papers by Day and Wolfe in the sixties before the appearance of the EM algorithm. It was the publication in 1977 of the latter algorithm by Dempster, Laird, and Rubin that greatly stimulated interest in the use of finite mixture distributions to model heterogeneous data. This is because the fitting of mixture models by maximum likelihood is a classic example of a problem that is simplified considerably by the EM's conceptual unification of maximum likelihood estimation from data that can be viewed as being incomplete. In recent times there has been a proliferation of applications in which the number of experimental units n is comparatively small but the underlying dimension p is extremely large as, for example, in microarray-based genomics and other high-throughput experimental approaches. Hence there has been increasing attention given not only in bioinformatics and machine learning, but also in mainstream statistics, to the analysis of complex data in this situation where n is small relative to p. The latter part of the talk shall focus on the modelling of such high-dimensional data using mixture distributions.
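The EM fitting of a two-component normal mixture described above can be sketched in a few lines. The following is an illustrative toy implementation on synthetic one-dimensional data, not production code; the guaranteed monotone increase of the log-likelihood is exactly the property the talk attributes to EM.

```python
import math

def em_two_normals(data, iters=30):
    """Fit a two-component normal mixture by EM; return means and log-lik trace."""
    mu = [min(data), max(data)]        # crude initialisation at the extremes
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    loglik = []
    for _ in range(iters):
        # E-step: component densities and posterior responsibilities
        dens = [[w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
                for x in data]
        loglik.append(sum(math.log(d[0] + d[1]) for d in dens))
        r = [[d[k] / (d[0] + d[1]) for k in range(2)] for d in dens]
        # M-step: weighted means, variances and mixing proportions
        for k in range(2):
            nk = sum(ri[k] for ri in r)
            mu[k] = sum(ri[k] * x for ri, x in zip(r, data)) / nk
            var[k] = max(sum(ri[k] * (x - mu[k]) ** 2
                             for ri, x in zip(r, data)) / nk, 1e-6)
            w[k] = nk / len(data)
    return mu, loglik

data = [-2.1, -1.9, -2.0, -2.2, 1.8, 2.0, 2.1, 1.9]   # two clear clusters
mu, loglik = em_two_normals(data)
```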
Likelihood-free Bayesian inference: modelling drug resistance in Mycobacterium tuberculosis
15:10 Fri 21 Oct, 2011 :: 7.15 Ingkarni Wardli :: Dr Scott Sisson :: University of New South Wales

Media...
A central pillar of Bayesian statistical inference is Monte Carlo integration, which is based on obtaining random samples from the posterior distribution. There are a number of standard ways to obtain these samples, provided that the likelihood function can be numerically evaluated. In the last 10 years, there has been a substantial push to develop methods that permit Bayesian inference in the presence of computationally intractable likelihood functions. These methods, termed ``likelihood-free'' or approximate Bayesian computation (ABC), are now being applied extensively across many disciplines. In this talk, I'll present a brief, non-technical overview of the ideas behind likelihood-free methods. I'll motivate and illustrate these ideas through an analysis of the epidemiological fitness cost of drug resistance in Mycobacterium tuberculosis.
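The rejection version of ABC can be sketched very simply: draw a parameter from the prior, simulate data under it, and keep the draw if a summary statistic lands close enough to the observed one. The example below (made-up data, a uniform prior, known noise scale, and the sample mean as summary) illustrates the idea only; it is not the tuberculosis analysis.

```python
import random
import statistics

random.seed(1)

# Observed data; we pretend only the sample mean is available as a summary.
observed = [2.1, 1.9, 2.0, 2.2, 1.8]
obs_mean = statistics.mean(observed)

def abc_rejection(n_draws=20000, tol=0.05, sigma=0.2):
    """ABC rejection sampler for the mean of a normal with known sigma."""
    accepted = []
    for _ in range(n_draws):
        theta = random.uniform(0.0, 4.0)                      # draw from prior
        sim = [random.gauss(theta, sigma) for _ in observed]  # simulate data
        if abs(statistics.mean(sim) - obs_mean) < tol:        # compare summaries
            accepted.append(theta)
    return accepted

posterior = abc_rejection()   # accepted draws approximate the posterior
```

Note that the likelihood is never evaluated; only forward simulation is used, which is the whole point when the likelihood is intractable.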
Mathematical opportunities in molecular space
15:10 Fri 28 Oct, 2011 :: B.18 Ingkarni Wardli :: Dr Aaron Thornton :: CSIRO

The study of molecular motion, interaction and space at the nanoscale has become a powerful tool in the area of gas separation, storage and conversion for efficient energy solutions. Modeling in this field has typically involved highly iterative computational algorithms such as molecular dynamics, Monte Carlo and quantum mechanics. Mathematical formulae in the form of analytical solutions to this field offer a range of useful and insightful advantages including optimization, bifurcation analysis and standardization. Here we present a few case scenarios where mathematics has provided insight and opportunities for further investigation.
Metric geometry in data analysis
13:10 Fri 11 Nov, 2011 :: B.19 Ingkarni Wardli :: Dr Facundo Memoli :: University of Adelaide

The problem of object matching under invariances can be studied using certain tools from metric geometry. The central idea is to regard objects as metric spaces (or metric measure spaces). The type of invariance that one wishes to have in the matching is encoded by the choice of the metrics with which one endows the objects. The standard example is matching objects in Euclidean space under rigid isometries: in this situation one would endow the objects with the Euclidean metric. More general scenarios are possible in which the desired invariance cannot be reflected by the preservation of an ambient space metric. Several ideas due to M. Gromov are useful for approaching this problem. The Gromov-Hausdorff distance is a natural candidate for doing this. However, this metric leads to very hard combinatorial optimization problems and it is difficult to relate to previously reported practical approaches to the problem of object matching. I will discuss different variations of these ideas, and in particular will show a construction of an L^p version of the Gromov-Hausdorff metric, called the Gromov-Wasserstein distance, which is based on mass transportation ideas. This new metric leads directly to quadratic optimization problems on continuous variables with linear constraints. As a consequence of establishing several lower bounds, several invariants of metric measure spaces turn out to be quantitatively stable in the GW sense. These invariants provide practical tools for the discrimination of shapes and connect the GW ideas to a number of pre-existing approaches.
Stability analysis of nonparallel unsteady flows via separation of variables
15:30 Fri 18 Nov, 2011 :: 7.15 Ingkarni Wardli :: Prof Georgy Burde :: Ben-Gurion University

Media...
The problem of variables separation in the linear stability equations, which govern the disturbance behavior in viscous incompressible fluid flows, is discussed. Stability of some unsteady nonparallel three-dimensional flows (exact solutions of the Navier-Stokes equations) is studied via separation of variables using a semi-analytical, semi-numerical approach. In this approach, a solution with separated variables is defined in a new coordinate system which is sought together with the solution form. As a result, the linear stability problems are reduced to eigenvalue problems for ordinary differential equations which can be solved numerically. In some specific cases, the eigenvalue problems can be solved analytically. Those unique examples of exact (explicit) solutions of the nonparallel unsteady flow stability problems provide a very useful test for methods used in the hydrodynamic stability theory. Exact solutions of the stability problems for some stagnation-type flows are presented.
Fluid flows in microstructured optical fibre fabrication
15:10 Fri 25 Nov, 2011 :: B.17 Ingkarni Wardli :: Mr Hayden Tronnolone :: University of Adelaide

Optical fibres are used extensively in modern telecommunications as they allow the transmission of information at high speeds. Microstructured optical fibres are a relatively new fibre design in which a waveguide for light is created by a series of air channels running along the length of the material. The flexibility of this design allows optical fibres to be created with adaptable (and previously unrealised) optical properties. However, the fluid flows that arise during fabrication can greatly distort the geometry, which can reduce the effectiveness of a fibre or render it useless. I will present an overview of the manufacturing process and highlight the difficulties. I will then focus on surface-tension driven deformation of the macroscopic version of the fibre extruded from a reservoir of molten glass, occurring during fabrication, which will be treated as a two-dimensional Stokes flow problem. I will outline two different complex-variable numerical techniques for solving this problem along with comparisons of the results, both to other models and to experimental data.
Collision and instability in a rotating fluid-filled torus
15:10 Mon 12 Dec, 2011 :: Benham Lecture Theatre :: Dr Richard Clarke :: The University of Auckland

The simple experiment discussed in this talk, first conceived by Madden and Mullin (JFM, 1994) as part of their investigations into the non-uniqueness of decaying turbulent flow, consists of a fluid-filled torus which is rotated in a horizontal plane. Turbulence within the contained flow is triggered through a rapid change in its rotation rate. The flow instabilities which transition the flow to this turbulent state, however, are truly fascinating in their own right, and form the subject of this presentation. Flow features observed in both UK- and Auckland-based experiments will be highlighted, and explained through both boundary-layer analysis and full DNS. In concluding we argue that this flow regime, with its compact geometry and lack of cumbersome flow entry effects, presents an ideal regime in which to study many prototype flow behaviours, very much in the same spirit as Taylor-Couette flow.
Financial risk measures - the theory and applications of backward stochastic difference/differential equations with respect to the single jump process
12:10 Mon 26 Mar, 2012 :: 5.57 Ingkarni Wardli :: Mr Bin Shen :: University of Adelaide

Media...
This is my PhD thesis, submitted one month ago. Chapter 1 introduces the backgrounds of the research fields. Each subsequent chapter is a published or an accepted paper. Chapter 2, to appear in Methodology and Computing in Applied Probability, establishes the theory of Backward Stochastic Difference Equations with respect to the single jump process in discrete time. Chapter 3, published in Stochastic Analysis and Applications, establishes the theory of Backward Stochastic Differential Equations with respect to the single jump process in continuous time. Chapters 2 and 3 constitute Part I: Theory. Chapter 4, published in Expert Systems With Applications, gives some examples of how to measure financial risks using the theory established in Chapter 2. Chapter 5, accepted by Journal of Applied Probability, considers the question of an optimal transaction between two investors to minimize their risks; it applies the theory established in Chapter 3. Chapters 4 and 5 constitute Part II: Applications.
Mathematical modelling of the surface adsorption for methane on carbon nanostructures
12:10 Mon 30 Apr, 2012 :: 5.57 Ingkarni Wardli :: Mr Olumide Adisa :: University of Adelaide

Media...
In this talk, methane (CH4) adsorption is investigated on both graphite and in the region between two aligned single-walled carbon nanotubes, which we refer to as the groove site. The Lennard–Jones potential function and the continuous approximation is exploited to determine surface binding energies between a single CH4 molecule and graphite and between a single CH4 and two aligned single-walled carbon nanotubes. The modelling indicates that for a CH4 molecule interacting with graphite, the binding energy of the system is minimized when the CH4 carbon is 3.83 angstroms above the surface of the graphitic carbon, while the binding energy of the CH4–groove site system is minimized when the CH4 carbon is 5.17 angstroms away from the common axis shared by the two aligned single-walled carbon nanotubes. These results confirm the current view that for larger groove sites, CH4 molecules in grooves are likely to move towards the outer surfaces of one of the single-walled carbon nanotubes. The results presented in this talk are computationally efficient and are in good agreement with experiments and molecular dynamics simulations, and show that CH4 adsorption on graphite and groove surfaces is more favourable at lower temperatures and higher pressures.
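As a rough numerical illustration of the kind of energy-minimisation calculation described (in reduced units with epsilon = sigma = 1, and for a single pair interaction rather than the continuous approximation used in the talk), the minimum of the Lennard-Jones 12-6 pair potential can be located by a simple grid search; analytically it sits at r = 2^(1/6) sigma.

```python
# Locate the minimum of the Lennard-Jones 12-6 pair potential by grid search.
# Reduced units: epsilon = sigma = 1, so the minimum is at r = 2**(1/6).

def lj(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones 12-6 pair potential."""
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

# Fine grid over a physically sensible range of separations, r in [0.9, 1.5).
grid = [0.9 + i * 1e-4 for i in range(6000)]
r_min = min(grid, key=lj)          # separation with the lowest energy
well_depth = lj(r_min)             # should be close to -epsilon
```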
Are Immigrants Discriminated in the Australian Labour Market?
12:10 Mon 7 May, 2012 :: 5.57 Ingkarni Wardli :: Ms Wei Xian Lim :: University of Adelaide

Media...
In this talk, I will present what I did in my honours project, which was to determine whether immigrants, categorised as immigrants from English-speaking countries and non-English-speaking countries, are discriminated against in the Australian labour market. To determine whether discrimination exists, a decomposition of the wage function is applied and analysed via regression analysis. Two different methods of estimating the unknown parameters in the wage function will be discussed: 1. the Ordinary Least Squares method; 2. the Quantile Regression method. This is your rare chance of hearing me talk about non-nanomathematics related stuff!
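The idea that separates quantile regression from least squares is that the tau-th quantile minimises the expected "pinball" (check) loss instead of the squared error. A minimal sketch of that loss, applied to a plain sample rather than a full regression (illustrative numbers only):

```python
# The tau-th sample quantile minimises the average pinball (check) loss.
# Here we minimise over the sample points themselves, where a minimiser
# is always attained.

def pinball_loss(q, ys, tau):
    """Average check loss of predicting the constant q for sample ys."""
    return sum((tau if y >= q else 1 - tau) * abs(y - q) for y in ys) / len(ys)

ys = [1, 2, 3, 4, 100]   # one large outlier
median_hat = min(ys, key=lambda q: pinball_loss(q, ys, 0.5))  # tau = 0.5: median
upper_hat = min(ys, key=lambda q: pinball_loss(q, ys, 0.9))   # tau = 0.9 quantile
```

Note how the median (tau = 0.5) ignores the outlier that would dominate a least-squares fit, which is one reason quantile methods are attractive for wage data.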
Change detection in rainfall times series for Perth, Western Australia
12:10 Mon 14 May, 2012 :: 5.57 Ingkarni Wardli :: Ms Farah Mohd Isa :: University of Adelaide

Media...
There have been numerous reports that the rainfall in south Western Australia, particularly around Perth, has undergone a step-change decrease, which is typically attributed to climate change. Four statistical tests are used to assess the empirical evidence for this claim on time series from five meteorological stations, all of which exceed 50 years in length. The tests used in this study are: the CUSUM; Bayesian change point analysis; the consecutive t-test; and Hotelling's T^2-statistic. Results from the multivariate Hotelling's T^2 analysis are compared with those from the three univariate analyses. The issue of multiple comparisons is discussed. A summary of the empirical evidence for the claimed step change in the Perth area is given.
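Of the four tests, the CUSUM is the simplest to sketch: cumulate deviations from the overall mean and locate the peak of the cumulative sum, which marks the most likely change point. The toy example below uses an artificial step-change series, not the Perth rainfall data:

```python
# CUSUM change-point sketch: cumulative sums of deviations from the overall
# mean; the estimated change point is where |S_k| peaks.

def cusum_changepoint(xs):
    mean = sum(xs) / len(xs)
    s, S = 0.0, []
    for x in xs:
        s += x - mean
        S.append(s)
    # argmax of |S_k|, returned as a 1-based observation count
    return max(range(len(S)), key=lambda k: abs(S[k])) + 1

series = [0.0] * 10 + [2.0] * 10   # mean steps up after observation 10
estimate = cusum_changepoint(series)
```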
P or NP: this is the question
13:10 Tue 22 May, 2012 :: 7.15 Ingkarni Wardli :: Dr Ali Eshragh :: School of Mathematical Sciences

Media...
Until the early 1970s, mathematicians concentrated mainly on the design of algorithms. The advent of computers shifted this focus from merely designing an algorithm to designing the most efficient algorithm. This created a new field of research, namely the complexity of algorithms, and with it the associated problem "Is P equal to NP?" was born. The latter question has remained open for more than four decades and is one of the most famous open problems of the 21st century. Any person who can solve this problem will be awarded US$1,000,000 by the Clay Institute. In this talk, we are going to introduce this problem through simple examples and explain one of the intriguing approaches that may help to solve it.
Introduction to quantales via axiomatic analysis
13:10 Fri 15 Jun, 2012 :: Napier LG28 :: Dr Ittay Weiss :: University of the South Pacific

Quantales were introduced by Mulvey in 1986 in the context of non-commutative topology, with the aim of providing a concrete non-commutative framework for the foundations of quantum mechanics. Since then quantales have found applications in other areas as well, among them the work of Flagg. Flagg considers certain special quantales, called value quantales, that are designed to capture the essential properties of ([0,\infty],\le,+) that are relevant for analysis. The result is a well-behaved theory of value-quantale-enriched metric spaces. I will introduce the notion of quantales as if they were designed for just this purpose, review most of the known results (since there are not too many), and address some new results, conjectures, and questions.
Hodge numbers and cohomology of complex algebraic varieties
13:10 Fri 10 Aug, 2012 :: Engineering North 218 :: Prof Gus Lehrer :: University of Sydney

Let $X$ be a complex algebraic variety defined over the ring $\mathfrak{O}$ of integers in a number field $K$ and let $\Gamma$ be a group of $\mathfrak{O}$-automorphisms of $X$. I shall discuss how the counting of rational points over reductions mod $p$ of $X$, and an analysis of the Hodge structure of the cohomology of $X$, may be used to determine the cohomology as a $\Gamma$-module. This will include some joint work with Alex Dimca and with Mark Kisin, and some classical unsolved problems.
Drawing of Viscous Threads with Temperature-dependent Viscosity
14:10 Fri 10 Aug, 2012 :: Engineering North N218 :: Dr Jonathan Wylie :: City University of Hong Kong

The drawing of viscous threads is important in a wide range of industrial applications and is a primary manufacturing process in the optical fiber and textile industries. Most of the materials used in these processes have viscosities that vary extremely strongly with temperature. We investigate the role played by viscous heating in the drawing of viscous threads. Usually, the effects of viscous heating and inertia are neglected because the parameters that characterize them are typically very small. However, by performing a detailed theoretical analysis we surprisingly show that even very small amounts of viscous heating can lead to a runaway phenomenon. On the other hand, inertia prevents runaway, and the interplay between viscous heating and inertia results in very complicated dynamics for the system. Even more surprisingly, in the absence of viscous heating, we find that a new type of instability can occur when a thread is heated by a radiative heat source. By analyzing an asymptotic limit of the Navier-Stokes equation we provide a theory that describes the nature of this instability and explains the seemingly counterintuitive behavior.
Air-cooled binary Rankine cycle performance with varying ambient temperature
12:10 Mon 13 Aug, 2012 :: B.21 Ingkarni Wardli :: Ms Josephine Varney :: University of Adelaide

Media...
Next month, I have to give a presentation in Reno, Nevada to a group of geologists, engineers and geophysicists. So, for this talk, I am going to ask you to pretend you know very little about maths (and perhaps a lot about geology) and give me some feedback on my proposed talk. The presentation itself is about the effect of air-cooling on geothermal power plant performance. Air-cooling is necessary for geothermal plays in dry areas, and ambient air temperature significantly affects the power output of air-cooled geothermal power plants. Hence, a method for determining the effect of ambient air temperature on geothermal power plants is presented. Using the ambient air temperature distribution from Leigh Creek, South Australia, this analysis shows that an optimally designed plant produces 6% more energy annually than a plant designed using the mean ambient temperature.
Star Wars Vs The Lord of the Rings: A Survival Analysis
12:10 Mon 27 Aug, 2012 :: B.21 Ingkarni Wardli :: Mr Christopher Davies :: University of Adelaide

Media...
Ever wondered whether you are more likely to die in the Galactic Empire or Middle Earth? Well this is the postgraduate seminar for you! I'll be attempting to answer this question using survival analysis, the statistical method of choice for investigating time to event data. Spoiler Warning: This talk will contain references to the deaths of characters in the above movie sagas.
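The workhorse for this kind of time-to-event question is the Kaplan-Meier estimator, which handles censoring (characters still alive when the credits roll). A minimal sketch with invented times and censoring indicators, assuming no tied event times; this is an illustration of the method, not the movie data:

```python
# Kaplan-Meier survival estimator for distinct event times.
# events[i] = 1 means a death was observed at times[i]; 0 means censored.

def kaplan_meier(times, events):
    """Return a list of (time, survival probability) at each observed death."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, curve = len(times), 1.0, []
    for i in order:
        if events[i]:                       # a death: the survival curve drops
            surv *= (at_risk - 1) / at_risk
            curve.append((times[i], surv))
        at_risk -= 1                        # censored subjects leave the risk set
    return curve

times = [1, 2, 3, 4, 5]
events = [1, 1, 0, 1, 1]    # the subject at t = 3 is censored, not dead
curve = kaplan_meier(times, events)
```

The censored observation at t = 3 shrinks the risk set without dropping the curve, which is exactly what distinguishes survival analysis from naively averaging death times.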
Principal Component Analysis (PCA)
12:30 Mon 3 Sep, 2012 :: B.21 Ingkarni Wardli :: Mr Lyron Winderbaum :: University of Adelaide

Media...
Principal Component Analysis (PCA) has become something of a buzzword recently in a number of disciplines, including gene expression analysis and facial recognition. It is a classical, and fundamentally simple, concept that has been around since the early 1900s. Its recent popularity is largely due to the need for dimension-reduction techniques in analyzing the high-dimensional data that have become more common in the last decade, and to the availability of the computing power to implement them. I will explain the concept, prove a result, and give a couple of examples. The talk should be accessible to all disciplines as it (should?) only assumes first-year linear algebra, the concept of a random variable, and covariance.
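A bare-bones sketch of the concept, for two-dimensional toy data where the 2x2 covariance matrix can be eigen-decomposed in closed form (real applications would use a linear-algebra library and more dimensions):

```python
import math

# PCA in two dimensions: the principal components are the eigenvectors of the
# covariance matrix, and the eigenvalues are the variances along them.

def pca_2d_eigvals(points):
    """Return the covariance eigenvalues (descending) for 2-d points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # covariance matrix entries [[a, b], [b, d]]
    a = sum((p[0] - mx) ** 2 for p in points) / n
    d = sum((p[1] - my) ** 2 for p in points) / n
    b = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    half_tr = (a + d) / 2
    disc = math.sqrt(((a - d) / 2) ** 2 + b ** 2)
    return [half_tr + disc, half_tr - disc]

# Toy data lying close to the line y = x: almost all variance is on one axis.
points = [(-1.0, -0.9), (1.0, 1.1), (2.0, 2.0), (-2.0, -2.2)]
lam1, lam2 = pca_2d_eigvals(points)
explained = lam1 / (lam1 + lam2)   # variance fraction along the first PC
```

For this near-collinear data the first component explains almost all the variance, which is the dimension-reduction payoff in miniature.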
Two classes of network structures that enable efficient information transmission
15:10 Fri 7 Sep, 2012 :: B.20 Ingkarni Wardli :: A/Prof Sanming Zhou :: The University of Melbourne

Media...
What network topologies should we use in order to achieve efficient information transmission? Of course answer to this question depends on how we measure efficiency of information dissemination. If we measure it by the minimum gossiping time under the store-and-forward, all-port and full-duplex model, we show that certain Cayley graphs associated with Frobenius groups are `perfect' in a sense. (A Frobenius group is a permutation group which is transitive but not regular such that only the identity element can fix two points.) Such graphs are also optimal for all-to-all routing in the sense that the maximum load on edges achieves the minimum. In this talk we will discuss this theory of optimal network design.
Electrokinetics of concentrated suspensions of spherical particles
15:10 Fri 28 Sep, 2012 :: B.21 Ingkarni Wardli :: Dr Bronwyn Bradshaw-Hajek :: University of South Australia

Electrokinetic techniques are used to gather specific information about concentrated dispersions such as electronic inks, mineral processing slurries, pharmaceutical products and biological fluids (e.g. blood). But, like most experimental techniques, intermediate quantities are measured, and consequently the method relies explicitly on theoretical modelling to extract the quantities of experimental interest. A self-consistent cell-model theory of electrokinetics can be used to determine the electrical conductivity of a dense suspension of spherical colloidal particles, and thereby determine the quantities of interest (such as the particle surface potential). The numerical predictions of this model compare well with published experimental results. High frequency asymptotic analysis of the cell-model leads to some interesting conclusions.
Turbulent flows, semtex, and rainbows
12:10 Mon 8 Oct, 2012 :: B.21 Ingkarni Wardli :: Ms Sophie Calabretto :: University of Adelaide

Media...
The analysis of turbulence in transient flows has applications across a broad range of fields. We use the flow of fluid in a toroidal container as a paradigm for studying the complex dynamics due to this turbulence. To explore the dynamics of our system, we exploit the numerical capabilities of semtex, a quadrilateral spectral element DNS code. Rainbows result.
Complex analysis in low Reynolds number hydrodynamics
15:10 Fri 12 Oct, 2012 :: B.20 Ingkarni Wardli :: Prof Darren Crowdy :: Imperial College London

Media...
It is a well-known fact that the methods of complex analysis provide great advantage in studying physical problems involving a harmonic field satisfying Laplace's equation. One example is in ideal fluid mechanics (infinite Reynolds number), where the absence of viscosity, and the assumption of zero vorticity, mean that it is possible to introduce a so-called complex potential -- an analytic function from which all physical quantities of interest can be inferred. In the opposite limit of zero Reynolds number, where flows are slow and viscous and the governing fields are not harmonic, it is much less common to employ the methods of complex analysis, even though they continue to be relevant in certain circumstances. This talk will give an overview of a variety of problems involving slow viscous Stokes flows where complex analysis can be usefully employed to gain theoretical insights. A number of example problems will be considered, including the locomotion of low-Reynolds-number micro-organisms and micro-robots, the friction properties of superhydrophobic surfaces in microfluidics, and problems of viscous sintering and the manufacture of microstructured optic fibres (MOFs).
Optimal Experimental Design: What Is It?
12:10 Mon 15 Oct, 2012 :: B.21 Ingkarni Wardli :: Mr David Price :: University of Adelaide

Media...
Optimal designs are a class of experimental designs that are optimal with respect to some statistical criterion. That answers the question, right? But what do I mean by 'optimal', and which 'statistical criterion' should you use? In this talk I will answer all these questions, and provide an overly simple example to demonstrate how optimal design works. I will then give a brief explanation of how I will use this methodology, and what chickens have to do with it.
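As a small illustration of what "optimal with respect to some statistical criterion" can mean, consider D-optimality for simple linear regression y = b0 + b1*x on [-1, 1]: choose the design points to maximise det(X^T X). The classic result is that concentrating observations at the endpoints beats spreading them out evenly. Toy numbers only; this has nothing to do with the chickens:

```python
# D-optimality sketch for the model y = b0 + b1*x with design points xs:
# the information matrix is X^T X = [[n, sum x], [sum x, sum x^2]].

def d_criterion(xs):
    """det(X^T X) for the two-parameter model (1, x)."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    return n * sxx - sx * sx

endpoint_design = [-1.0, -1.0, 1.0, 1.0]           # all mass at the extremes
spaced_design = [-1.0, -1.0 / 3, 1.0 / 3, 1.0]     # equally spaced points
```

Here det = 16 for the endpoint design versus about 8.9 for the spaced one, so the endpoint design estimates the slope and intercept more precisely for the same number of runs.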
Epidemic models in socially structured populations: when are simple models too simple?
14:00 Thu 25 Oct, 2012 :: 5.56 Ingkarni Wardli :: Dr Lorenzo Pellis :: The University of Warwick

Both age and household structure are recognised as important heterogeneities affecting epidemic spread of infectious pathogens, and many models exist nowadays that include either or both forms of heterogeneity. However, different models may fit aggregate epidemic data equally well and nevertheless lead to different predictions of public health interest. I will here present an overview of stochastic epidemic models with increasing complexity in their social structure, focusing in particular on households models. For these models, I will present recent results about the definition and computation of the basic reproduction number R0 and its relationship with other threshold parameters. Finally, I will use these results to compare models with no, either or both age and household structure, with the aim of quantifying the conditions under which each form of heterogeneity is relevant and therefore providing some criteria that can be used to guide model design for real-time predictions.
On the chromatic number of a random hypergraph
13:10 Fri 22 Mar, 2013 :: Ingkarni Wardli B21 :: Dr Catherine Greenhill :: University of New South Wales

A hypergraph is a set of vertices and a set of hyperedges, where each hyperedge is a subset of vertices. A hypergraph is r-uniform if every hyperedge contains r vertices. A colouring of a hypergraph is an assignment of colours to vertices such that no hyperedge is monochromatic. When the colours are drawn from the set {1,..,k}, this defines a k-colouring. We consider the problem of k-colouring a random r-uniform hypergraph with n vertices and cn edges, where k, r and c are constants and n tends to infinity. In this setting, Achlioptas and Naor showed that for the case of r = 2, the chromatic number of a random graph must have one of two easily computable values as n tends to infinity. I will describe some joint work with Martin Dyer (Leeds) and Alan Frieze (Carnegie Mellon), in which we generalised this result to random uniform hypergraphs. The argument uses the second moment method, and applies a general theorem for performing Laplace summation over a lattice. So the proof contains something for everyone, with elements from combinatorics, analysis and algebra.
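The definitions above are easy to make concrete. A minimal sketch (vertices, hyperedges and colourings in plain Python containers; the example hypergraph is made up, not from the talk):

```python
def is_proper(colouring, hyperedges):
    # A colouring is proper when no hyperedge is monochromatic,
    # i.e. every hyperedge sees at least two distinct colours.
    return all(len({colouring[v] for v in edge}) >= 2 for edge in hyperedges)

# A 3-uniform hypergraph on vertices 0..3 with two hyperedges.
edges = [(0, 1, 2), (1, 2, 3)]
print(is_proper({0: 1, 1: 1, 2: 2, 3: 1}, edges))  # True
print(is_proper({0: 1, 1: 1, 2: 1, 3: 2}, edges))  # False: (0,1,2) is monochromatic
```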
A stability theorem for elliptic Harnack inequalities
15:10 Fri 5 Apr, 2013 :: B.18 Ingkarni Wardli :: Prof Richard Bass :: University of Connecticut

Media...
Harnack inequalities are an important tool in probability theory, analysis, and partial differential equations. The classical Harnack inequality is just the one you learned in your graduate complex analysis class, but there have been many extensions, to different spaces, such as manifolds, fractals, infinite graphs, and to various sorts of elliptic operators. A landmark result was that of Moser in 1961, where he proved the Harnack inequality for solutions to a class of partial differential equations. I will talk about the stability of Harnack inequalities. The main result says that if the Harnack inequality holds for an operator on a space, then the Harnack inequality will also hold for a large class of other operators on that same space. This provides a generalization of the result of Moser.
Pulsatile Flow
12:10 Mon 20 May, 2013 :: B.19 Ingkarni Wardli :: David Wilke :: University of Adelaide

Media...
Blood flow within the human arterial system is inherently unsteady as a consequence of the pulsations of the heart. The unsteady nature of the flow gives rise to a number of important flow features which may be critical in understanding pathologies of the cardiovascular system. For example, it is believed that large oscillations in wall shear stress may enhance the effects of atherosclerosis, among other pathologies. In this talk I will present some of the basic concepts of pulsatile flow and follow the analysis first performed by J.R. Womersley in his seminal 1955 paper.
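The key dimensionless group in Womersley's analysis is the Womersley number, alpha = R*sqrt(omega/nu), which compares unsteady inertial forces to viscous forces. A quick sketch with illustrative values (the numbers below are typical textbook figures, not data from the talk):

```python
import math

def womersley_number(radius, omega, nu):
    # alpha = R * sqrt(omega / nu), with R the vessel radius (m),
    # omega the angular frequency of the pulse (rad/s) and nu the
    # kinematic viscosity (m^2/s). Large alpha means the velocity
    # profile is far from the steady Poiseuille parabola.
    return radius * math.sqrt(omega / nu)

# Illustrative aortic values: radius 1 cm, heart rate 1.2 Hz,
# kinematic viscosity of blood about 3.3e-6 m^2/s.
omega = 2 * math.pi * 1.2
print(round(womersley_number(0.01, omega, 3.3e-6), 1))
```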
Multiscale modelling couples patches of wave-like simulations
12:10 Mon 27 May, 2013 :: B.19 Ingkarni Wardli :: Meng Cao :: University of Adelaide

Media...
A multiscale model is proposed to significantly reduce the expensive numerical simulations of complicated waves over large spatial domains. The multiscale model is built from given microscale simulations of complicated physical processes such as sea ice or turbulent shallow water. Our long term aim is to enable macroscale simulations obtained by coupling small patches of simulations together over large physical distances. This initial work explores the coupling of patch simulations of wave-like PDEs. With water waves as the eventual application, we discuss the dynamics of two complementary fields called the 'depth' h and 'velocity' u. A staggered grid is used for the microscale simulation of the depth h and velocity u. We introduce a macroscale staggered grid to couple the microscale patches. Linear or quadratic interpolation provides boundary conditions on the field in each patch. Linear analysis of the whole coupled multiscale system establishes that the resultant macroscale dynamics is appropriate. Numerical simulations support the linear analysis. This multiscale method should empower the feasible computation of large scale simulations of wave-like dynamics with complicated underlying physics.
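As a toy version of the patch coupling described above (one value per patch rather than whole fields; purely illustrative):

```python
def patch_edge_value(centre_left, centre_right, frac):
    # Linear interpolation between neighbouring macroscale patch values
    # supplies the boundary condition for the patch in between; the real
    # scheme interpolates whole depth/velocity fields on a staggered grid.
    return (1 - frac) * centre_left + frac * centre_right

# Edge a quarter of the way from the left patch centre to the right:
print(patch_edge_value(1.0, 3.0, 0.25))  # 1.5
```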
Medical Decision Analysis
12:10 Mon 2 Sep, 2013 :: B.19 Ingkarni Wardli :: Eka Baker :: University of Adelaide

Doctors make life changing decisions every day based on clinical trial data. However, this data is often obtained from studies on healthy individuals or on patients with only the disease that a treatment is targeting. Outside of these studies, many patients will have other conditions that may affect the predicted benefit of receiving a certain treatment. I will talk about what clinical trials are, how to measure the benefit of treatments, and how having multiple conditions (comorbidities) will affect the benefit of treatments.
Thin-film flow in helical channels
12:10 Mon 9 Sep, 2013 :: B.19 Ingkarni Wardli :: David Arnold :: University of Adelaide

Media...
Spiral particle separators are used in the mineral processing industry to refine ores. A slurry, formed by mixing crushed ore with a fluid, is run down a helical channel and at the end of the channel, the particles end up sorted in different sections of the channel. Design of such devices is largely experimentally based, and mathematical modelling of flow in helical channels is relatively limited. In this talk, I will outline some of the work that I have been doing on thin-film flow in helical channels.
Dynamics and the geometry of numbers
14:10 Fri 27 Sep, 2013 :: Horace Lamb Lecture Theatre :: Prof Akshay Venkatesh :: Stanford University

Media...
It was understood by Minkowski that one could prove interesting results in number theory by considering the geometry of lattices in R^n. (A lattice is simply a grid of points.) This technique is called the "geometry of numbers." We now understand much more about analysis and dynamics on the space of all lattices, and this has led to a deeper understanding of classical questions. I will review some of these ideas, with emphasis on the dynamical aspects.
Gravitational slingshot and space mission design
15:10 Fri 11 Oct, 2013 :: B.18 Ingkarni Wardli :: Prof Pawel Nurowski :: Polish Academy of Sciences

Media...
When planning a space mission, the weight of the spacecraft is the main issue. Every gram sent into outer space costs a lot. A considerable part of the overall weight of the spaceship consists of the fuel needed to control it. I will explain how space agencies reduce the amount of fuel needed to go to a given place in the Solar System by using the gravity of celestial bodies encountered along the trip. I will start with an explanation of an old trick called the `gravitational slingshot', and end with a modern technique based on the analysis of a 3-body problem appearing in Newtonian mechanics.
Classification Using Censored Functional Data
15:10 Fri 18 Oct, 2013 :: B.18 Ingkarni Wardli :: A/Prof Aurore Delaigle :: University of Melbourne

Media...
We consider classification of functional data. This problem has received a lot of attention in the literature in the case where the curves are all observed on the same interval. A difficulty in applications is that the functional curves can be supported on quite different intervals, in which case standard methods of analysis cannot be used. We are interested in constructing classifiers for curves of this type. More precisely, we consider classification of functions supported on a compact interval, in cases where the training sample consists of functions observed on other intervals, which may differ among the training curves. We propose several methods, depending on whether or not the observable intervals overlap by a significant amount. In the case where these intervals differ a lot, our procedure involves extending the curves outside the interval where they were observed. We suggest a new nonparametric approach for doing this. We also introduce flexible ways of combining potential differences in shapes of the curves from different populations, and potential differences between the endpoints of the intervals where the curves from each population are observed.
Group meeting
15:10 Fri 25 Oct, 2013 :: 5.58 (Ingkarni Wardli) :: Dr Ben Binder and Mr David Wilke :: University of Adelaide

Dr Ben Binder :: 'An inverse approach for solutions to free-surface flow problems' :: Abstract: Surface water waves are familiar to most people, for example, the wave pattern generated at the stern of a ship. The boundary or interface between the air and water is called the free surface. When determining a solution to a free-surface flow problem it is commonplace for the forcing (e.g. shape of ship or waterbed topography) that creates the surface waves to be prescribed, with the free surface coming as part of the solution. Alternatively, one can choose to prescribe the shape of the free surface and find the forcing inversely. In this talk I will discuss my ongoing work using an inverse approach to discover new types of solutions to free-surface flow problems in two and three dimensions, and how the predictions of the method might be verified with experiments. :: Mr David Wilke :: 'A Computational Fluid Dynamic Study of Blood Flow Within the Coiled Umbilical Arteries' :: Abstract: The umbilical cord is the lifeline of the fetus throughout gestation. In a normal pregnancy it facilitates the supply of oxygen and nutrients from the placenta via a single vein, in addition to the return of deoxygenated blood from the developing embryo or fetus via two umbilical arteries. Despite the major role it plays in the growth of the fetus, pathologies of the umbilical cord are poorly understood. In particular, variations in the cord geometry, which typically forms a helical arrangement, have been correlated with adverse outcomes in pregnancy. Cords exhibiting either abnormally low or high levels of coiling have been associated with pathological results including growth restriction and fetal demise. Despite this, the methodology currently employed by clinicians to characterise umbilical pathologies can misdiagnose cords and is prone to error.
In this talk a computational model of blood flow within rigid three-dimensional structures representative of the umbilical arteries will be presented. This study determined that the current characterization was unable to differentiate between cords which exhibited clinically distinguishable flow properties, including the cord pressure drop, which provides a measure of the loading on the fetal heart.
Modelling and optimisation of group dose-response challenge experiments
12:10 Mon 28 Oct, 2013 :: B.19 Ingkarni Wardli :: David Price :: University of Adelaide

Media...
An important component of scientific research is the 'experiment'. Effective design of these experiments is important and, accordingly, has received significant attention under the heading 'optimal experimental design'. However, until recently, little work has been done on optimal experimental design for experiments where the underlying process can be modelled by a Markov chain. In this talk, I will discuss some of the work that has been done in the field of optimal experimental design for Markov chains, and some of the work that I have done in applying this theory to dose-response challenge experiments for the bacterium Campylobacter jejuni in chickens.
All at sea with spectral analysis
11:10 Tue 19 Nov, 2013 :: Ingkarni Wardli Level 5 Room 5.56 :: A/Prof Andrew Metcalfe :: The University of Adelaide

The steady state response of a single degree of freedom damped linear system to a sinusoidal input is a sinusoidal function at the same frequency, but generally with a different amplitude and a phase shift. The analogous result for a random stationary input can be described in terms of input and response spectra and a transfer function description of the linear system. The practical use of this result is that the parameters of a linear system can be estimated from the input and response spectra, and the response spectrum can be predicted if the transfer function and input spectrum are known. I shall demonstrate these results with data from a small ship in the North Sea. The results from the sea trial raise the issue of non-linearity, and second order amplitude response functions are obtained using auto-regressive estimators. The possibility of using wavelets rather than spectra is considered in the context of single degree of freedom linear systems. Everybody is welcome to attend. Please note the change of venue - we will be in room 5.56.
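A minimal numerical sketch of the transfer-function idea (a pure gain-and-phase 'system' driven at a single frequency; the values are assumed for illustration, not the sea-trial data):

```python
import numpy as np

# Drive a linear "system" (a known gain and phase lag) with a sinusoid,
# then recover the amplitude ratio and phase shift from the FFTs of the
# input and the response, as in the frequency-domain description above.
fs, f0, gain, lag = 100.0, 5.0, 2.5, 0.8         # assumed values
t = np.arange(0, 10, 1 / fs)                     # 10 s record, integer cycles
x = np.sin(2 * np.pi * f0 * t)                   # input
y = gain * np.sin(2 * np.pi * f0 * t - lag)      # response

X, Y = np.fft.rfft(x), np.fft.rfft(y)
k = np.argmax(np.abs(X))                         # bin of the driving frequency
H = Y[k] / X[k]                                  # empirical transfer function at f0
print(round(abs(H), 2), round(-np.angle(H), 2))  # recovers gain 2.5, lag 0.8
```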
Holomorphic null curves and the conformal Calabi-Yau problem
12:10 Tue 28 Jan, 2014 :: Ingkarni Wardli B20 :: Prof Franc Forstneric :: University of Ljubljana

Media...
I shall describe how methods of complex analysis can be used to give new results on the conformal Calabi-Yau problem concerning the existence of bounded metrically complete minimal surfaces in real Euclidean 3-space R^3. We shall see in particular that every bordered Riemann surface admits a proper complete holomorphic immersion into the ball of C^2, and a proper complete embedding as a holomorphic null curve into the ball of C^3. Since the real and the imaginary parts of a holomorphic null curve in C^3 are conformally immersed minimal surfaces in R^3, we obtain a bounded complete conformal minimal immersion of any bordered Riemann surface into R^3. The main advantage of our methods, when compared to the existing ones in the literature, is that we do not need to change the conformal type of the Riemann surface. (Joint work with A. Alarcon, University of Granada.)
Hormander's estimate, some generalizations and new applications
12:10 Mon 17 Feb, 2014 :: Ingkarni Wardli B20 :: Prof Zbigniew Blocki :: Jagiellonian University

Lars Hormander proved his estimate for the d-bar equation in 1965. It is one of the most important results in several complex variables (SCV). New applications have emerged recently, outside of SCV. We will present three of them: the Ohsawa-Takegoshi extension theorem with optimal constant, the one-dimensional Suita Conjecture, and Nazarov's approach to the Bourgain-Milman inequality from convex analysis.
The structuring role of chaotic stirring on pelagic ecosystems
11:10 Fri 28 Feb, 2014 :: B19 Ingkarni Wardli :: Dr Francesco d'Ovidio :: Universite Pierre et Marie Curie (Paris VI)

The open ocean upper layer is characterized by complex transport dynamics occurring over different spatiotemporal scales. At the scale of 10-100 km - which covers the so-called mesoscale and part of the submesoscale - in situ and remote sensing observations detect strong variability in physical and biogeochemical fields like sea surface temperature, salinity, and chlorophyll concentration. The calculation of Lyapunov exponents and other nonlinear diagnostics applied to the surface currents has made it possible to show that an important part of this tracer variability is due to chaotic stirring. Here I will extend this analysis to marine ecosystems. For primary producers, I will show that stable and unstable manifolds of hyperbolic points embedded in the surface velocity field are able to structure the phytoplanktonic community in fluid dynamical niches of dominant types, where competition can locally occur during bloom events. By using data from tagged whales, frigatebirds, and elephant seals, I will also show that chaotic stirring affects the behaviour of higher trophic levels. In perspective, these relations between transport structures and marine ecosystems can form the basis of a biodiversity index constructed from satellite information, and therefore able to monitor key aspects of marine biodiversity and its temporal variability at the global scale.
The effects of pre-existing immunity
15:10 Fri 7 Mar, 2014 :: B.18 Ingkarni Wardli :: Associate Professor Jane Heffernan :: York University, Canada

Media...
Immune system memory, also called immunity, is gained as a result of primary infection or vaccination, and can be boosted after vaccination or secondary infections. Immunity is developed so that the immune system is primed to react and fight a pathogen earlier and more effectively in secondary infections. The effects of memory, however, on pathogen propagation in an individual host (in-host) and a population (epidemiology) are not well understood. Mathematical models of infectious diseases, employing dynamical systems, computer simulation and bifurcation analysis, can provide projections of pathogen propagation, show outcomes of infection and help inform public health interventions. In the Modelling Infection and Immunity (MI^2) lab, we develop and study biologically informed mathematical models of infectious diseases at both levels of infection, and combine these models into comprehensive multi-scale models so that the effects of individual immunity in a population can be determined. In this talk we will discuss some of the interesting mathematical phenomena that arise in our models, and show how our results are directly applicable to what is known about the persistence of infectious diseases.
Viscoelastic fluids: mathematical challenges in determining their relaxation spectra
15:10 Mon 17 Mar, 2014 :: 5.58 Ingkarni Wardli :: Professor Russell Davies :: Cardiff University

Determining the relaxation spectrum of a viscoelastic fluid is a crucial step before a linear or nonlinear constitutive model can be applied. Information about the relaxation spectrum is obtained from simple flow experiments such as creep or oscillatory shear. However, the determination process involves the solution of one or more highly ill-posed inverse problems. The availability of only discrete data, the presence of noise in the data, as well as incomplete data, collectively make the problem very hard to solve. In this talk I will illustrate the mathematical challenges inherent in determining relaxation spectra, and also introduce the method of wavelet regularization which enables the representation of a continuous relaxation spectrum by a set of hyperbolic scaling functions.
A model for the BitCoin block chain that takes propagation delays into account
15:10 Fri 28 Mar, 2014 :: B.21 Ingkarni Wardli :: Professor Peter Taylor :: The University of Melbourne

Media...
Unlike cash transactions, most electronic transactions require the presence of a trusted authority to verify that the payer has sufficient funding to be able to make the transaction and to adjust the account balances of the payer and payee. In recent years BitCoin has been proposed as an "electronic equivalent of cash". The general idea is that transactions are verified in a coded form in a block chain, which is maintained by the community of participants. Problems can arise when the block chain splits: that is, different participants have different versions of the block chain, something which can happen only when there are propagation delays, at least if all participants are behaving according to the protocol. In this talk I shall present a preliminary model for the splitting behaviour of the block chain. I shall then go on to perform a similar analysis for a situation where a group of participants has adopted a recently-proposed strategy for gaining a greater advantage from BitCoin processing than its combined computer power should be able to control.
Aircraft flight dynamics and stability
12:10 Mon 31 Mar, 2014 :: B.19 Ingkarni Wardli :: David Arnold :: University of Adelaide

Media...
In general, a stable plane is safer, more efficient and more comfortable than an unstable plane; however, there are many design features that affect stability. In this talk I will discuss the dynamics of fixed-wing aircraft in flight, with particular emphasis on stability. I will discuss some basic stability considerations and how they influence aircraft design, as well as some interesting modes of instability and how they may be managed. Hopefully this talk will help to explain why planes look the way they do.
Semiclassical restriction estimates
12:10 Fri 4 Apr, 2014 :: Ingkarni Wardli B20 :: Melissa Tacy :: University of Adelaide

Eigenfunctions of Hamiltonians arise naturally in the theory of quantum mechanics as stationary states of quantum systems. Their eigenvalues have an interpretation as the square root of E, where E is the energy of the system. We wish to better understand the high energy limit which defines the boundary between quantum and classical mechanics. In this talk I will focus on results regarding the restriction of eigenfunctions to lower dimensional subspaces, in particular to hypersurfaces. A convenient way to study such problems is to reframe them as problems in semiclassical analysis.
Bayesian Indirect Inference
12:10 Mon 14 Apr, 2014 :: B.19 Ingkarni Wardli :: Brock Hermans :: University of Adelaide

Media...
Bayesian likelihood-free methods saw the resurgence of Bayesian statistics through the use of computer sampling techniques. Since the resurgence, attention has focused on so-called 'summary statistics', that is, ways of summarising data that allow for accurate inference to be performed. However, it is not uncommon to find data sets in which the summary statistic approach is not sufficient. In this talk, I will be summarising some of the likelihood-free methods most commonly used (don't worry if you've never seen any Bayesian analysis before), as well as looking at Bayesian Indirect Likelihood, a new way of implementing Bayesian analysis which combines new inference methods with some of the older computational algorithms.
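A bare-bones rejection sampler in the likelihood-free spirit described above (a normal toy model with the sample mean as summary statistic; everything here is an assumed illustration, not one of the methods compared in the talk):

```python
import random

def abc_rejection(observed_mean, n, tol, draws=2000, seed=1):
    # Likelihood-free inference by rejection: propose theta from the
    # prior, simulate a data set, and keep theta when the summary
    # statistic (here the sample mean) lands within tol of the observed one.
    rng = random.Random(seed)
    accepted = []
    for _ in range(draws):
        theta = rng.uniform(0, 10)                   # flat prior on [0, 10]
        sim_mean = sum(rng.gauss(theta, 1) for _ in range(n)) / n
        if abs(sim_mean - observed_mean) < tol:
            accepted.append(theta)
    return accepted

post = abc_rejection(observed_mean=4.0, n=20, tol=0.3)
print(len(post), sum(post) / len(post))  # accepted draws cluster near theta = 4
```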
Network-based approaches to classification and biomarker identification in metastatic melanoma
15:10 Fri 2 May, 2014 :: B.21 Ingkarni Wardli :: Associate Professor Jean Yee Hwa Yang :: The University of Sydney

Media...
Finding prognostic markers has been a central question in much of current research in medicine and biology. In the last decade, approaches to prognostic prediction within a genomics setting are primarily based on changes in individual genes / proteins. Very recently, however, network based approaches to prognostic prediction have begun to emerge which utilize interaction information between genes. This is based on the belief that large-scale molecular interaction networks are dynamic in nature and changes in these networks, rather than changes in individual genes/proteins, are often drivers of complex diseases such as cancer. In this talk, I use data from stage III melanoma patients provided by Prof. Mann from the Melanoma Institute of Australia to discuss how network information can be utilized in the analysis of gene expression data to aid biological interpretation. Here, we explore a number of novel and previously published network-based prediction methods, which we will then compare to the common single-gene and gene-set methods with the aim of identifying more biologically interpretable biomarkers in the form of networks.
Multiple Sclerosis and linear stability analysis
12:35 Mon 19 May, 2014 :: B.19 Ingkarni Wardli :: Saber Dini :: University of Adelaide

Media...
Multiple sclerosis (MS) is an inflammatory disease in which the immune system of the body attacks the myelin sheaths around axons in the brain and damages, or in other words demyelinates, the axons. The demyelination process can lead to scarring as well as a broad spectrum of signs and symptoms. The brain of vertebrates has a mechanism to repair the demyelination, or remyelinate, the damaged area. Remyelination in the brain is accomplished by glial cells (the support cells of neurons). Glial cells must accumulate in the damaged areas of the brain to start the repair process, and this accumulation can be viewed as an instability. Therefore, spatiotemporal linear stability analysis can be undertaken to investigate quantitative aspects of the remyelination process.
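To give the flavour of such an analysis on a deliberately simple model (a generic linearised reaction-diffusion equation, not the MS model from the talk): substituting a perturbation proportional to exp(sigma*t + i*k*x) into u_t = a*u + D*u_xx gives the dispersion relation sigma(k) = a - D*k^2, and the sign of sigma decides which wavelengths grow.

```python
def growth_rate(k, a=1.0, D=0.5):
    # Dispersion relation sigma(k) = a - D*k**2 for perturbations
    # exp(sigma*t + i*k*x) of u_t = a*u + D*u_xx: long waves grow
    # (an instability, i.e. accumulation), short waves are damped.
    # Illustrative coefficients, not fitted to any biology.
    return a - D * k ** 2

print(growth_rate(0.5) > 0)  # True: long-wavelength mode grows
print(growth_rate(3.0) > 0)  # False: short-wavelength mode decays
```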
Group meeting
15:10 Fri 6 Jun, 2014 :: 5.58 Ingkarni Wardli :: Meng Cao and Trent Mattner :: University of Adelaide

Meng Cao :: Multiscale modelling couples patches of nonlinear wave-like simulations :: Abstract: The multiscale gap-tooth scheme is built from given microscale simulations of complicated physical processes to empower macroscale simulations. By coupling small patches of simulations over unsimulated physical gaps, large savings in computational time are possible. So far the gap-tooth scheme has been developed for dissipative systems, but wave systems are also of great interest. This work develops the gap-tooth scheme for nonlinear microscale simulations of wave-like systems. Classic macroscale interpolation provides a generic coupling between patches that achieves arbitrarily high-order consistency between the multiscale scheme and the underlying microscale dynamics. Eigen-analysis indicates that the resultant gap-tooth scheme empowers feasible computation of large-scale simulations of wave-like dynamics with complicated underlying physics. As a pilot study, we implement numerical simulations of dam-breaking waves by the gap-tooth scheme. Comparison between a gap-tooth simulation, a microscale simulation over the whole domain, and some published experimental data on dam breaking demonstrates that the gap-tooth scheme feasibly computes large-scale wave-like dynamics with computational savings. :: Trent Mattner :: Coupled atmosphere-fire simulations of the Canberra 2003 bushfires using WRF-Sfire :: Abstract: The Canberra fires of January 18, 2003 are notorious for the extreme fire behaviour and fire-atmosphere-topography interactions that occurred, including lee-slope fire channelling, pyrocumulonimbus development and tornado formation. In this talk, I will discuss coupled fire-weather simulations of the Canberra fires using WRF-SFire.
In these simulations, a fire-behaviour model is used to dynamically predict the evolution of the fire front according to local atmospheric and topographic conditions, as well as the associated heat and moisture fluxes to the atmosphere. It is found that the predicted fire front and heat flux is not too bad, bearing in mind the complexity of the problem and the severe modelling assumptions made. However, the predicted moisture flux is too low, which has some impact on atmospheric dynamics.
Software and protocol verification using Alloy
12:10 Mon 25 Aug, 2014 :: B.19 Ingkarni Wardli :: Dinesha Ranathunga :: University of Adelaide

Media...
Reliable software isn't achieved by trial and error. It requires tools to support verification. Alloy is a tool based on set theory that allows expression of a logic-based model of software or a protocol, and hence allows checking of this model. In this talk, I will cover its key concepts, language syntax and analysis features.
Neural Development of the Visual System: a laminar approach
15:10 Fri 29 Aug, 2014 :: N132 Engineering North :: Dr Andrew Oster :: Eastern Washington University

Media...
In this talk, we will introduce the architecture of the visual system in higher order primates and cats. Through activity-dependent plasticity mechanisms, the left and right eye streams segregate in the cortex in a stripe-like manner, resulting in a pattern called an ocular dominance map. We introduce a mathematical model to study how such a neural wiring pattern emerges. We go on to consider the joint development of the ocular dominance map with another feature of the visual system, the cytochrome oxidase blobs, which appear in the center of the ocular dominance stripes. Since cortex is in fact comprised of layers, we introduce a simple laminar model and perform a stability analysis of the wiring pattern. This intricate biological structure (ocular dominance stripes with "blobs" periodically distributed in their centers) can be understood as occurring due to two Turing instabilities combined with the leading-order dynamics of the system.
Neural Development of the Visual System: a laminar approach
15:10 Fri 29 Aug, 2014 :: This talk will now be given as a School Colloquium :: Dr Andrew Oster :: Eastern Washington University

In this talk, we will introduce the architecture of the visual system in higher order primates and cats. Through activity-dependent plasticity mechanisms, the left and right eye streams segregate in the cortex in a stripe-like manner, resulting in a pattern called an ocular dominance map. We introduce a mathematical model to study how such a neural wiring pattern emerges. We go on to consider the joint development of the ocular dominance map with another feature of the visual system, the cytochrome oxidase blobs, which appear in the center of the ocular dominance stripes. Since cortex is in fact comprised of layers, we introduce a simple laminar model and perform a stability analysis of the wiring pattern. This intricate biological structure (ocular dominance stripes with 'blobs' periodically distributed in their centers) can be understood as occurring due to two Turing instabilities combined with the leading-order dynamics of the system.
Inferring absolute population and recruitment of southern rock lobster using only catch and effort data
12:35 Mon 22 Sep, 2014 :: B.19 Ingkarni Wardli :: John Feenstra :: University of Adelaide

Media...
Abundance estimates from a data-limited version of catch survey analysis are compared to those from a novel one-parameter deterministic method. Bias of both methods is explored using simulation testing based on a more complex data-rich stock assessment population dynamics fishery operating model, exploring the impact of both varying levels of observation error in data as well as model process error. Recruitment was consistently better estimated than legal size population, the latter most sensitive to increasing observation errors. A hybrid of the data-limited methods is proposed as the most robust approach. A more statistically conventional errors-in-variables approach may also be touched upon, if time permits.
To Complex Analysis... and beyond!
12:10 Mon 29 Sep, 2014 :: B.19 Ingkarni Wardli :: Brett Chenoweth :: University of Adelaide

Media...
In the undergraduate complex analysis course, students learn about complex-valued functions on domains in C (the complex plane). Several interesting and surprising results come out of this study. In my talk I will introduce a more general setting in which complex analysis can be done, namely Riemann surfaces (complex manifolds of dimension 1). I will then prove that all non-compact Riemann surfaces are Stein, which loosely speaking means that their function theory is similar to that of C.
Nonlinear analysis over infinite dimensional spaces and its applications
12:10 Fri 6 Feb, 2015 :: Ingkarni Wardli B20 :: Tsuyoshi Kato :: Kyoto University

In this talk we develop the moduli theory of holomorphic curves over infinite dimensional manifolds consisting of sequences of almost Kähler manifolds. Under the assumption of high symmetry, we verify that many mechanisms of the standard moduli theory over closed symplectic manifolds also work over these infinite dimensional spaces. As an application, we study the deformation theory of discrete groups acting on trees. There is a canonical way, up to conjugacy, to embed such groups into the automorphism group of the infinite projective space. We verify that for some classes of Hamiltonian functions, the deformed groups must always be asymptotically infinite.
Predicting pressure drops in pipelines due to pump trip events
12:10 Mon 2 Mar, 2015 :: Napier LG29 :: David Arnold :: University of Adelaide

Media...
Sunwater is a Queensland company that designs, builds and manages large-scale water infrastructure such as dams, weirs and pipelines. In this talk, I will discuss one aspect that is crucial in the design stage of long pipelines: the pipeline's ability to withstand large pressure disturbances caused by pump trip events. A pump trip is a sudden, unplanned shutdown of a pump, which causes potentially destructive pressure waves to propagate through the pipe network. Accurate simulation of such events is time-consuming and costly, so rules of thumb and intuition are used during the initial planning and design of a pipeline project. I will discuss some simple mathematical models for pump trip events, show some results, and discuss how they could be used in the initial design process.
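A standard first estimate for the size of such a pressure wave is the Joukowsky equation, which relates the surge pressure to the flow velocity lost when a pump stops abruptly. The sketch below is a generic illustration with made-up pipe parameters, not a model from the talk:

```python
# Joukowsky estimate of the pressure surge from an abrupt flow stoppage:
#   delta_p = rho * a * delta_v
# where rho is the fluid density, a the pressure-wave speed in the pipe,
# and delta_v the change in flow velocity. All values are illustrative.

def joukowsky_surge(rho, wave_speed, delta_v):
    """Pressure rise (Pa) for an instantaneous velocity change delta_v (m/s)."""
    return rho * wave_speed * delta_v

rho = 1000.0   # water density, kg/m^3
a = 1000.0     # typical pressure-wave speed in a steel pipe, m/s
delta_v = 2.0  # flow velocity lost when the pump trips, m/s

surge_pa = joukowsky_surge(rho, a, delta_v)
print(f"Surge: {surge_pa / 1e5:.1f} bar")  # prints: Surge: 20.0 bar
```

Even this crude estimate shows why the problem matters: a 2 m/s velocity change produces a surge of roughly 20 bar, far above typical operating pressures.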
On the analyticity of CR-diffeomorphisms
12:10 Fri 13 Mar, 2015 :: Engineering North N132 :: Ilya Kossivskiy :: University of Vienna

One of the fundamental objects in several complex variables is CR-mappings. CR-mappings naturally occur in complex analysis as boundary values of mappings between domains, and as restrictions of holomorphic mappings onto real submanifolds. It was already observed by Cartan that smooth CR-diffeomorphisms between CR-submanifolds in C^N tend to be very regular, i.e., they are restrictions of holomorphic maps. However, in general smooth CR-mappings form a more restrictive class of mappings. Thus, since the inception of CR-geometry, the following general question has been of fundamental importance for the field: Are CR-equivalent real-analytic CR-structures also equivalent holomorphically? In joint work with Lamel, we answer this question in the negative, in any positive CR-dimension and CR-codimension. Our construction is based on a recent dynamical technique in CR-geometry, developed in my earlier work with Shafikov.
Groups acting on trees
12:10 Fri 10 Apr, 2015 :: Napier 144 :: Anitha Thillaisundaram :: Heinrich Heine University of Duesseldorf

From a geometric point of view, branch groups are groups acting spherically transitively on a spherically homogeneous rooted tree. The applications of branch groups reach out to analysis, geometry, combinatorics, and probability. The earliest constructions of branch groups were the Grigorchuk group and the Gupta-Sidki p-groups. Among its many claims to fame, the Grigorchuk group was the first example of a group of intermediate growth (i.e. neither polynomial nor exponential). Here we consider a generalisation of the family of Grigorchuk-Gupta-Sidki groups, and we examine the restricted occurrence of their maximal subgroups.
Workshop on Geometric Quantisation
10:10 Mon 27 Jul, 2015 :: Level 7 conference room Ingkarni Wardli :: Michele Vergne, Weiping Zhang, Eckhard Meinrenken, Nigel Higson and many others

Media...
Geometric quantisation has been an increasingly active area since before the 1980s, with links to physics, symplectic geometry, representation theory, index theory, and differential geometry and geometric analysis in general. In addition to its relevance as a field on its own, it acts as a focal point for the interaction between all of these areas, which has yielded far-reaching and powerful results. This workshop features a large number of international speakers, who are all well-known for their work in (differential) geometry, representation theory and/or geometric analysis. This is a great opportunity for anyone interested in these areas to meet and learn from some of the top mathematicians in the world. Students are especially welcome. Registration is free.
Dynamics on Networks: The role of local dynamics and global networks on hypersynchronous neural activity
15:10 Fri 31 Jul, 2015 :: Ingkarni Wardli B21 :: Prof John Terry :: University of Exeter, UK

Media...

Graph theory has evolved into a useful tool for studying complex brain networks inferred from a variety of measures of neural activity, including fMRI, DTI, MEG and EEG. In the study of neurological disorders, recent work has discovered differences in the structure of graphs inferred from patient and control cohorts. However, most of these studies pursue a purely observational approach; identifying correlations between properties of graphs and the cohort which they describe, without consideration of the underlying mechanisms. To move beyond this necessitates the development of mathematical modelling approaches to appropriately interpret network interactions and the alterations in brain dynamics they permit.

In the talk we introduce some of these concepts with application to epilepsy, introducing a dynamic network approach to study resting-state EEG recordings from a cohort of 35 people with epilepsy and 40 adult controls. Using this framework we demonstrate a strongly significant difference between networks inferred from the background activity of people with epilepsy and those of normal controls. Our findings demonstrate that a mathematical model-based analysis of routine clinical EEG provides significant additional information beyond standard clinical interpretation, which may ultimately enable a more appropriate mechanistic stratification of people with epilepsy, leading to improved diagnostics and therapeutics.

Mathematical Modeling and Analysis of Active Suspensions
14:10 Mon 3 Aug, 2015 :: Napier 209 :: Professor Michael Shelley :: Courant Institute of Mathematical Sciences, New York University

Complex fluids that have a 'bio-active' microstructure, like suspensions of swimming bacteria or assemblies of immersed biopolymers and motor-proteins, are important examples of so-called active matter. These internally driven fluids can have strange mechanical properties, and show persistent activity-driven flows and self-organization. I will show how first-principles PDE models are derived through reciprocal coupling of the 'active stresses' generated by collective microscopic activity to the fluid's macroscopic flows. These PDEs have interesting analytic structure and dynamics that agree qualitatively with experimental observations: they predict the transitions to flow instability and persistent mixing observed in bacterial suspensions, and for microtubule assemblies they show the generation, propagation, and annihilation of disclination defects. I'll discuss how these models might be used to study yet more complex biophysical systems.
Pattern Formation in Nature
12:10 Mon 31 Aug, 2015 :: Benham Labs G10 :: Saber Dini :: University of Adelaide

Media...
Pattern formation is a ubiquitous process in nature: embryo development, animal skin pigmentation, etc. I will talk about how Alan Turing (the British genius known for the Turing Machine) explained pattern formation via linear stability analysis of reaction-diffusion systems.
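The essence of Turing's analysis can be reproduced in a few lines: linearise a two-species reaction-diffusion system about a homogeneous steady state and check whether diffusion destabilises some spatial wavenumber. The Jacobian and diffusivities below are illustrative numbers chosen to exhibit the effect, not taken from the talk:

```python
import numpy as np

# Linear stability of u_t = D u_xx + J u about a homogeneous steady state.
# A Turing instability requires the state to be stable without diffusion
# (eigenvalues of J in the left half-plane) yet unstable for some
# wavenumber k once diffusion is added: perturbations ~ exp(i k x)
# grow when J - k^2 D has an eigenvalue with positive real part.

J = np.array([[1.0, -1.0],   # activator-inhibitor Jacobian at the steady state
              [3.0, -2.0]])
D = np.diag([1.0, 10.0])     # the inhibitor diffuses much faster

def growth_rate(k):
    """Largest real part of the eigenvalues of J - k^2 D."""
    return np.linalg.eigvals(J - k**2 * D).real.max()

stable_without_diffusion = growth_rate(0.0) < 0
ks = np.linspace(0.01, 2.0, 200)
turing_unstable = any(growth_rate(k) > 0 for k in ks)
print(stable_without_diffusion, turing_unstable)  # prints: True True
```

The key ingredient is the disparity of diffusivities: with equal diffusion coefficients the same Jacobian gives no band of unstable wavenumbers.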
IGA/AMSI Workshop -- Australia-Japan Geometry, Analysis and their Applications
09:00 Mon 19 Oct, 2015 :: Ingkarni Wardli Conference Room 7.15 (Level 7)

Media...
Interdisciplinary workshop between Australia and Japan on Geometry, Analysis and their Applications.
Use of epidemic models in optimal decision making
15:00 Thu 19 Nov, 2015 :: Ingkarni Wardli 5.57 :: Tim Kinyanjui :: School of Mathematics, The University of Manchester

Media...
Epidemic models have proved useful in a number of applications in epidemiology. In this work, I will present two areas in which we have used modelling to make informed decisions. Firstly, we have used an age-structured mathematical model to describe the transmission of Respiratory Syncytial Virus (RSV) in a developed-country setting and to explore different vaccination strategies. We found that delayed infant vaccination has significant potential to reduce the number of hospitalisations in the most vulnerable group, and that most of the reduction is due to indirect protection. It also suggests that marked public health benefit could be achieved through an RSV vaccine delivered to age groups not seen as most at risk of severe disease. The second application is in the optimal design of studies aimed at the collection of household-stratified infection data. A design decision involves making a trade-off between the number of households to enrol and the sampling frequency. Two commonly used study designs are considered: cross-sectional and cohort. The search for an optimal design uses Bayesian methods to explore the joint parameter-design space, combined with the Shannon entropy of the posteriors to estimate the amount of information for each design. We found that for cross-sectional designs the amount of information increases with the sampling intensity, while the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing data collection studies.
How predictable are you? Information and happiness in social media.
12:10 Mon 21 Mar, 2016 :: Ingkarni Wardli Conference Room 715 :: Dr Lewis Mitchell :: School of Mathematical Sciences

Media...
The explosion of ``Big Data'' coming from online social networks and the like has opened up the new field of ``computational social science'', which applies a quantitative lens to problems traditionally in the domain of psychologists, anthropologists and social scientists. What does it mean to be influential? How do ideas propagate amongst populations? Is happiness contagious? For the first time, mathematicians, statisticians, and computer scientists can provide insight into these and other questions. Using data from social networks such as Facebook and Twitter, I will give an overview of recent research trends in computational social science, describe some of my own work using techniques like sentiment analysis and information theory in this realm, and explain how you can get involved with this highly rewarding research field as well.
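One concrete reading of "how predictable are you" is the Shannon entropy of a user's word distribution: a lower entropy stream is easier to predict. The sketch below is a toy illustration with made-up text, not the talk's methodology:

```python
from collections import Counter
from math import log2

def entropy(text):
    """Shannon entropy (bits per word) of the word distribution in `text`."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A repetitive stream is more predictable (lower entropy) than a varied one.
repetitive = "good morning good morning good morning"
varied = "stochastic processes on networks exhibit rich emergent dynamics"
print(entropy(repetitive) < entropy(varied))  # prints: True
```

Real analyses work with far larger vocabularies and with conditional (context-dependent) entropies, but the same quantity is the starting point.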
Geometric analysis of gap-labelling
12:10 Fri 8 Apr, 2016 :: Eng & Maths EM205 :: Mathai Varghese :: University of Adelaide

Media...
Using an earlier result, joint with Quillen, I will formulate a gap labelling conjecture for magnetic Schrodinger operators with smooth aperiodic potentials on Euclidean space. Results in low dimensions will be given, and the formulation of the same problem for certain non-Euclidean spaces will be given if time permits. This is ongoing joint work with Moulay Benameur.
Hot tube tau machine
15:10 Fri 15 Apr, 2016 :: B17 Ingkarni Wardli :: Dr Hayden Tronnolone :: University of Adelaide

Microstructured optical fibres may be fabricated by first extruding molten material from a die to produce a macroscopic version of the final design, called a preform, and then stretching this to produce a fibre. In this talk I will demonstrate how to couple an existing model of the fluid flow during the extrusion stage to a basic model of the fluid temperature, and present some preliminary conclusions. This work is still in progress and is being carried out in collaboration with Yvonne Stokes, Michael Chen and Jonathan Wylie. (+ Any items for group discussion)
Sard Theorem for the endpoint map in sub-Riemannian manifolds
12:10 Fri 29 Apr, 2016 :: Eng & Maths EM205 :: Alessandro Ottazzi :: University of New South Wales

Media...
Sub-Riemannian geometries occur in several areas of pure and applied mathematics, including harmonic analysis, PDEs, control theory, metric geometry, geometric group theory, and neurobiology. We introduce sub-Riemannian manifolds and give some examples. We then discuss some of the open problems, focusing in particular on the Sard Theorem for the endpoint map, which is related to the study of length minimizers. Finally, we consider some recent results obtained in collaboration with E. Le Donne, R. Montgomery, P. Pansu and D. Vittone.
Mathematical modelling of the immune response to influenza
15:00 Thu 12 May, 2016 :: Ingkarni Wardli B20 :: Ada Yan :: University of Melbourne

Media...
The immune response plays an important role in the resolution of primary influenza infection and prevention of subsequent infection in an individual. However, the relative roles of each component of the immune response in clearing infection, and the effects of interaction between components, are not well quantified.

We have constructed a model of the immune response to influenza based on data from viral interference experiments, where ferrets were exposed to two influenza strains within a short time period. The changes in viral kinetics of the second virus due to the first virus depend on the strains used as well as the interval between exposures, enabling inference of the timing of innate and adaptive immune response components and the role of cross-reactivity in resolving infection. Our model provides a mechanistic explanation for the observed variation in viruses' abilities to protect against subsequent infection at short inter-exposure intervals, either by delaying the second infection or inducing stochastic extinction of the second virus. It also explains the decrease in recovery time for the second infection when the two strains elicit cross-reactive cellular adaptive immune responses. To account for inter-subject as well as inter-virus variation, the model is formulated using a hierarchical framework. We will fit the model to experimental data using Markov Chain Monte Carlo methods; quantification of the model will enable a deeper understanding of the effects of potential new treatments.
Harmonic analysis of Hodge-Dirac operators
12:10 Fri 13 May, 2016 :: Eng & Maths EM205 :: Pierre Portal :: Australian National University

Media...
When the metric on a Riemannian manifold is perturbed in a rough (merely bounded and measurable) manner, do basic estimates involving the Hodge Dirac operator $D = d+d^*$ remain valid? Even in the model case of a perturbation of the Euclidean metric on $\mathbb{R}^n$, this is a difficult question. For instance, the fact that the $L^2$ estimate $\|Du\|_2 \sim \|\sqrt{D^{2}}u\|_2$ remains valid for perturbed versions of $D$ was a famous conjecture made by Kato in 1961 and solved, positively, in a groundbreaking paper of Auscher, Hofmann, Lacey, McIntosh and Tchamitchian in 2002. In the past fifteen years, a theory has emerged from the solution of this conjecture, making rough perturbation problems much more tractable. In this talk, I will give a general introduction to this theory, and present one of its latest results: a flexible approach to $L^p$ estimates for the holomorphic functional calculus of $D$. This is joint work with D. Frey (Delft) and A. McIntosh (ANU).
Harmonic Analysis in Rough Contexts
15:10 Fri 13 May, 2016 :: Engineering South S112 :: Dr Pierre Portal :: Australian National University

Media...
In recent years, perspectives on what constitutes the ``natural" framework within which to conduct various forms of mathematical analysis have shifted substantially. The common theme of these shifts can be described as a move towards roughness, i.e. the elimination of smoothness assumptions that had previously been considered fundamental. Examples include partial differential equations on domains with a boundary that is merely Lipschitz continuous, geometric analysis on metric measure spaces that do not have a smooth structure, and stochastic analysis of dynamical systems that have nowhere differentiable trajectories. In this talk, aimed at a general mathematical audience, I describe some of these shifts towards roughness, placing an emphasis on harmonic analysis, and on my own contributions. This includes the development of heat kernel methods in situations where such a kernel is merely a distribution, and applications to deterministic and stochastic partial differential equations.
Smooth mapping orbifolds
12:10 Fri 20 May, 2016 :: Eng & Maths EM205 :: David Roberts :: University of Adelaide

It is well-known that orbifolds can be represented by a special kind of Lie groupoid, namely those that are étale and proper. Lie groupoids themselves are one way of presenting certain nice differentiable stacks. In joint work with Ray Vozzo we have constructed a presentation of the mapping stack Hom(disc(M),X), for M a compact manifold and X a differentiable stack, by a Fréchet-Lie groupoid. This uses an apparently new result in global analysis about the map C^\infty(K_1,Y) \to C^\infty(K_2,Y) induced by restriction along the inclusion K_2 \to K_1, for certain compact K_1,K_2. We apply this to the case of X being an orbifold to show that the mapping stack is an infinite-dimensional orbifold groupoid. We also present results about mapping groupoids for bundle gerbes.
Time series analysis of paleo-climate proxies (a mathematical perspective)
15:10 Fri 27 May, 2016 :: Engineering South S112 :: Dr Thomas Stemler :: University of Western Australia

Media...
In this talk I will present the work that my colleagues from the School of Earth and Environment (UWA), the "trans-disciplinary methods" group of the Potsdam Institute for Climate Impact Research, Germany, and I did to explain the dynamics of the Australian-South East Asian monsoon system over the last couple of thousand years. From a time series perspective, paleo-climate proxy series are more or less the monsters moving under your bed that wake you up in the middle of the night: the data is clearly non-stationary, non-uniformly sampled in time, and the influence of stochastic forcing and the level of measurement noise are more or less unknown. Given these undesirable properties, almost all traditional time series analysis methods fail. I will highlight two methods that allow us to draw useful conclusions from the data sets. The first uses Gaussian kernel methods to reconstruct climate networks from multiple proxies. The coupling relationships in these networks change over time and can therefore be used to infer which areas of the monsoon system dominate the complex dynamics of the whole system. Secondly, I will introduce the transformation cost time series method, which allows us to detect changes in the dynamics of a non-uniformly sampled time series. Unlike the frequently used interpolation approach, our new method does not corrupt the data and therefore avoids biases in any subsequent analysis. While I will again focus on paleo-climate proxies, the method can be used in other applied areas where regular sampling is not possible.
Multi-scale modeling in biofluids and particle aggregation
15:10 Fri 17 Jun, 2016 :: B17 Ingkarni Wardli :: Dr Sarthok Sircar :: University of Adelaide

In today's seminar I will give two examples from mathematical biology that describe multi-scale organization at two levels: the meso/micro level and the continuum/macro level. I will then detail suitable tools from statistical mechanics to link these different scales. The first problem arises in mathematical physiology: the swelling/de-swelling mechanism of mucus, an ionic gel. Mucus is packaged inside cells at high concentration (volume fraction) and, when released into the extracellular environment, it expands in volume by two orders of magnitude in a matter of seconds. This rapid expansion is due to the rapid exchange of calcium and sodium that changes the cross-linked structure of the mucus polymers, thereby causing them to swell. Modeling this problem involves a two-phase polymer/solvent mixture theory (at the continuum-level description), together with the chemistry of the polymer, its nearest-neighbor interactions and its binding with the dissolved ionic species (at the micro-scale description). The problem is posed as a free-boundary problem, with the boundary conditions derived from a combination of a variational principle and perturbation analysis. The dynamics of neutral gels and the equilibrium states of the ionic gels are analyzed. In the second example, we numerically study the adhesion-fragmentation dynamics of clusters of rigid, round particles subject to a homogeneous shear flow. At the macro level we describe the dynamics of the number density of these clusters. The description at the micro-scale includes (a) binding/unbinding of the bonds attached to the particle surface, (b) bond torsion, (c) surface potential due to the ionic medium, and (d) flow hydrodynamics due to the shear flow.
Chern-Simons invariants of Seifert manifolds via Loop spaces
14:10 Tue 28 Jun, 2016 :: Ingkarni Wardli B17 :: Ryan Mickler :: Northeastern University

Over the past 30 years, the Chern-Simons functional for connections on G-bundles over three-manifolds has led to a deep understanding of the geometry of three-manifolds, as well as knot invariants such as the Jones polynomial. Here we study this functional for three-manifolds that are topologically given as the total space of a principal circle bundle over a compact Riemann surface base; these are known as Seifert manifolds. We show that on such manifolds the Chern-Simons functional reduces to a particular gauge-theoretic functional on the 2d base, which describes a gauge theory of connections on an infinite dimensional bundle over this base with structure group given by the level-k affine central extension of the loop group LG. We show that this formulation gives a new understanding of results of Beasley-Witten on the computability of quantum Chern-Simons invariants of these manifolds, as well as of knot invariants for knots that wrap a single fiber of the circle bundle. A central tool in our analysis is the Caloron correspondence of Murray-Stevenson-Vozzo.
Product Hardy spaces associated to operators with heat kernel bounds on spaces of homogeneous type
12:10 Fri 19 Aug, 2016 :: Ingkarni Wardli B18 :: Lesley Ward :: University of South Australia

Media...
Much effort has been devoted to generalizing the Calderón-Zygmund theory in harmonic analysis from Euclidean spaces to metric measure spaces, or spaces of homogeneous type. Here the underlying space R^n with Euclidean metric and Lebesgue measure is replaced by a set X with a general metric or quasi-metric and a doubling measure. Further, one can replace the Laplacian operator that underpins the Calderón-Zygmund theory by more general operators L satisfying heat kernel estimates. I will present recent joint work with P. Chen, X.T. Duong, J. Li and L.X. Yan along these lines. We develop the theory of product Hardy spaces H^p_{L_1,L_2}(X_1 x X_2), for 1
A principled experimental design approach to big data analysis
15:10 Fri 23 Sep, 2016 :: Napier G03 :: Prof Kerrie Mengersen :: Queensland University of Technology

Media...
Big Datasets are endemic, but they are often notoriously difficult to analyse because of their size, complexity, history and quality. The purpose of this paper is to open a discourse on the use of modern experimental design methods to analyse Big Data in order to answer particular questions of interest. By appeal to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has wide generality and advantageous inferential and computational properties. In particular, the principled experimental design approach is shown to provide a flexible framework for analysis that, for certain classes of objectives and utility functions, delivers equivalent answers compared with analyses of the full dataset. It can also provide a formalised method for iterative parameter estimation, model checking, identification of data gaps and evaluation of data quality. Finally it has the potential to add value to other Big Data sampling algorithms, in particular divide-and-conquer strategies, by determining efficient sub-samples.
SIR epidemics with stages of infection
12:10 Wed 28 Sep, 2016 :: EM218 :: Matthieu Simon :: Universite Libre de Bruxelles

Media...
This talk is concerned with a stochastic model for the spread of an epidemic in a closed homogeneously mixing population. The population is subdivided into three classes of individuals: the susceptibles, the infectives and the removed cases. In short, an infective remains infectious during a random period of time. While infected, it can contact all the susceptibles present, independently of the other infectives. At the end of the infectious period, it becomes a removed case and has no further part in the infection process.

We represent an infectious period as a set of different stages that an infective can go through before being removed. The transitions between stages are ruled by either a Markov process or a semi-Markov process. In each stage, an infective makes contaminations at the epochs of a Poisson process with a specific rate.

Our purpose is to derive closed expressions for a transform of different statistics related to the end of the epidemic, such as the final number of susceptibles and the area under the trajectories of all the infectives. The analysis is performed by using simple matrix analytic methods and martingale arguments. Numerical illustrations will be provided at the end of the talk.
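The staged infectious period described above is straightforward to simulate in the Markovian case, where each stage is exponential and the whole infectious period is therefore Erlang-distributed. The sketch below is an illustrative final-size simulation, not the matrix-analytic machinery of the talk, and all rates are made up:

```python
import random

def sir_staged(n_susceptible, beta, stage_rate, n_stages, seed=1):
    """Stochastic SIR final-size simulation in which each infective passes
    through `n_stages` exponential stages (each of rate `stage_rate`) before
    removal, and while infected contacts each susceptible at rate `beta`.
    Starts with one infective; returns the final number of susceptibles.
    Only the embedded jump chain is simulated, since event times are not
    needed for the final size."""
    rng = random.Random(seed)
    S = n_susceptible
    stages = [0]  # current stage index of each infective
    while stages:
        rate_inf = beta * S * len(stages)      # total infection rate
        rate_stage = stage_rate * len(stages)  # total stage-transition rate
        if rng.random() * (rate_inf + rate_stage) < rate_inf:
            S -= 1
            stages.append(0)                   # new infective enters stage 0
        else:
            i = rng.randrange(len(stages))
            stages[i] += 1
            if stages[i] == n_stages:          # finished last stage: removed
                stages.pop(i)
    return S

print(sir_staged(n_susceptible=50, beta=0.02, stage_rate=1.0, n_stages=3))
```

Replacing the exponential stage clocks with general distributions gives the semi-Markov version of the model; the talk's closed-form transforms avoid simulation entirely.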
Leavitt path algebras
12:10 Fri 2 Dec, 2016 :: Engineering & Math EM213 :: Roozbeh Hazrat :: Western Sydney University

Media...
From a directed graph one can generate an algebra that captures the movements along the graph. One such algebra is the Leavitt path algebra. Despite being introduced only 10 years ago, Leavitt path algebras have arisen in a variety of contexts as diverse as analysis, symbolic dynamics, noncommutative geometry and representation theory. In fact, Leavitt path algebras are the algebraic counterpart to graph C*-algebras, a theory which has become an area of intensive research globally. There are strikingly parallel similarities between these two theories. Even more surprisingly, one cannot (yet) obtain the results in one theory as a consequence of the other; the statements look the same, but the techniques used to prove them are quite different (as the names suggest, one uses Algebra and the other Analysis). All this suggests that there might be a bridge between Algebra and Analysis yet to be uncovered. In this talk, we introduce Leavitt path algebras and try to classify them by means of (graded) Grothendieck groups. We will ask nice questions!
Segregation of particles in incompressible flows due to streamline topology and particle-boundary interaction
15:10 Fri 2 Dec, 2016 :: Ingkarni Wardli 5.57 :: Professor Hendrik C. Kuhlmann :: Institute of Fluid Mechanics and Heat Transfer, TU Wien, Vienna, Austria

Media...
The incompressible flow in a number of classical benchmark problems (e.g. the lid-driven cavity, the liquid bridge) undergoes an instability from a two-dimensional steady flow to a periodic three-dimensional flow, which is steady or in the form of a traveling wave, as the Reynolds number is increased. In the supercritical regime, chaotic as well as regular (quasi-periodic) streamlines can coexist over a range of Reynolds numbers. The spatial structures of the regular regions in three-dimensional Navier-Stokes flows have received relatively little attention, partly because of the high numerical effort required to resolve these structures. Particles whose density does not differ much from that of the liquid approximately follow the chaotic or regular streamlines in the bulk. Near the boundaries, however, their trajectories deviate strongly from the streamlines, in particular if the boundary (wall or free surface) is moving tangentially. As a result of this particle-boundary interaction, particles can rapidly segregate and be attracted to periodic or quasi-periodic orbits, yielding particle accumulation structures (PAS). The mechanism of PAS will be explained, and results from experiments and numerical modelling will be presented to demonstrate the generic character of the phenomenon.
What is index theory?
12:10 Tue 21 Mar, 2017 :: Inkgarni Wardli 5.57 :: Dr Peter Hochs :: School of Mathematical Sciences

Media...
Index theory is a link between topology, geometry and analysis. A typical theorem in index theory says that two numbers are equal: an analytic index and a topological index. The first theorem of this kind was the index theorem of Atiyah and Singer, which they proved in 1963. Index theorems have many applications in maths and physics. For example, they can be used to prove that a differential equation must have a solution. Also, they imply that the topology of a space like a sphere or a torus determines in what ways it can be curved. Topology is the study of geometric properties that do not change if we stretch or compress a shape without cutting or glueing. Curvature does change when we stretch something out, so it is surprising that topology can say anything about curvature. Index theory has many surprising consequences like this.
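A classical prototype of the claim that topology constrains curvature, not mentioned in the abstract but the natural first example and an early ancestor of index theorems, is the Gauss-Bonnet theorem: for a closed oriented surface $M$ with Gaussian curvature $K$,

```latex
\frac{1}{2\pi} \int_M K \, dA = \chi(M),
```

so the Euler characteristic $\chi(M)$, a purely topological quantity, fixes the total curvature. A torus ($\chi = 0$) cannot be curved positively everywhere, while a sphere ($\chi = 2$) must carry positive curvature somewhere, no matter how it is stretched.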
Minimal surfaces and complex analysis
12:10 Fri 24 Mar, 2017 :: Napier 209 :: Antonio Alarcon :: University of Granada

Media...
A surface in the Euclidean space R^3 is said to be minimal if it is locally area-minimizing, meaning that every point in the surface admits a compact neighborhood with the least area among all the surfaces with the same boundary. Although the origin of minimal surfaces is in physics, since they can be realized locally as soap films, this family of surfaces lies in the intersection of many fields of mathematics. In particular, complex analysis in one and several variables plays a fundamental role in the theory. In this lecture we will discuss the influence of complex analysis in the study of minimal surfaces.
K-types of tempered representations
12:10 Fri 7 Apr, 2017 :: Napier 209 :: Peter Hochs :: University of Adelaide

Media...
Tempered representations of a reductive Lie group G are the irreducible unitary representations one needs in the Plancherel decomposition of L^2(G). They are relevant to harmonic analysis because of this, and also occur in the Langlands classification of the larger class of admissible representations. If K in G is a maximal compact subgroup, then there is a considerable amount of information in the restriction of a tempered representation to K. In joint work with Yanli Song and Shilin Yu, we give a geometric expression for the decomposition of such a restriction into irreducibles. The multiplicities of these irreducibles are expressed as indices of Dirac operators on reduced spaces of a coadjoint orbit of G corresponding to the representation. These reduced spaces are Spin-c analogues of reduced spaces in symplectic geometry, defined in terms of moment maps that represent conserved quantities. This result involves a Spin-c version of the quantisation commutes with reduction principle for noncompact manifolds. For discrete series representations, this was done by Paradan in 2003.
Lagrangian transport in deterministic flows: from theory to experiment
16:10 Tue 16 May, 2017 :: Engineering North N132 :: Dr Michel Speetjens :: Eindhoven University of Technology

Transport of scalar quantities (e.g. chemical species, nutrients, heat) in deterministic flows is key to a wide range of phenomena and processes in industry and Nature. This encompasses length scales ranging from microns to hundreds of kilometres, and includes systems as diverse as viscous flows in the processing industry, micro-fluidic flows in labs-on-a-chip and porous media, large-scale geophysical and environmental flows, physiological and biological flows and even continuum descriptions of granular flows. Essential to the net transport of a scalar quantity is its advection by the fluid motion. The Lagrangian perspective (arguably) is the most natural way to investigate advection and leans on the fact that fluid trajectories are organized into coherent structures that geometrically determine the advective transport properties. Lagrangian transport is typically investigated via theoretical and computational studies and often concerns idealized flow situations that are difficult (or even impossible) to create in laboratory experiments. However, bridging the gap from theoretical and computational results to realistic flows is essential for their physical meaningfulness and practical relevance. This presentation highlights a number of fundamental Lagrangian transport phenomena and properties in both two-dimensional and three-dimensional flows and demonstrates their physical validity by way of representative and experimentally realizable flows.
Complex methods in real integral geometry
12:10 Fri 28 Jul, 2017 :: Engineering Sth S111 :: Mike Eastwood :: University of Adelaide

There are well-known analogies between holomorphic integral transforms such as the Penrose transform and real integral transforms such as the Radon, Funk, and John transforms. In fact, one can make a precise connection between them and hence use complex methods to establish results in the real setting. This talk will introduce some simple integral transforms and indicate how complex analysis may be applied.
On the fundamentals of Rayleigh-Taylor instability and interfacial mixing
15:10 Fri 15 Sep, 2017 :: Ingkarni Wardli B17 :: Prof Snezhana Abarzhi :: University of Western Australia

Rayleigh-Taylor instability (RTI) develops when fluids of different densities are accelerated against their density gradient. Extensive interfacial mixing of the fluids ensues with time. Rayleigh-Taylor (RT) mixing controls a broad variety of processes in fluids, plasmas and materials, in high and low energy density regimes, at astrophysical and atomistic scales. Examples include the formation of hot spots in inertial confinement fusion, supernova explosions, stellar and planetary convection, flows in the atmosphere and ocean, reactive and supercritical fluids, material transformation under impact, and light-material interaction. In some of these cases (e.g. inertial confinement fusion) RT mixing should be tightly mitigated; in others (e.g. turbulent combustion) it should be strongly enhanced. Understanding the fundamentals of RTI is crucial for achieving better control of non-equilibrium processes in nature and technology. Traditionally, it was presumed that RTI leads to uncontrolled growth of small-scale imperfections, single-scale nonlinear dynamics, and extensive mixing similar to canonical turbulence. The recent success of theory and experiments in fluids and plasmas suggests an alternative scenario of RTI evolution: the interface is necessary for RT mixing to accelerate, the acceleration effects are strong enough to suppress the development of turbulence, and the RT dynamics is multi-scale with a significant degree of order. This talk presents a physics-based consideration of the fundamentals of RTI and RT mixing, and summarizes what is certain and what is not so certain in our knowledge of RTI. The focus question is: how can the regularization process in RT mixing be influenced? We also discuss new opportunities for improving predictive modeling capabilities, physical description, and control of RT mixing in fluids, plasmas and materials.
On directions and operators
11:10 Wed 27 Sep, 2017 :: Engineering & Math EM213 :: Malabika Pramanik :: University of British Columbia

Media...
Many fundamental operators arising in harmonic analysis are governed by sets of directions that they are naturally associated with. This talk will survey a few representative results in this area, and report on some new developments.
Understanding burn injuries and first aid treatment using simple mathematical models
15:10 Fri 13 Oct, 2017 :: Ingkarni Wardli B17 :: Prof Mat Simpson :: Queensland University of Technology

Scald burns from accidental exposure to hot liquids are the most common cause of burn injury in children. Over 2000 children are treated for accidental burn injuries in Australia each year. Despite the frequency of these injuries, basic questions about the physics of heat transfer in living tissues remain unanswered. For example, skin thickness varies with age and anatomical location, yet our understanding of how tissue damage from thermal injury is influenced by skin thickness is surprisingly limited. In this presentation we will consider a series of porcine experiments to study heat transfer in living tissues. We consider burning the living tissue, as well as applying various first aid treatment strategies to cool the living tissue after injury. By calibrating solutions of simple mathematical models to match the experimental data we provide insight into how thermal energy propagates through living tissues, as well as exploring different first aid strategies. We conclude by outlining some of our current work that aims to produce more realistic mathematical models.
A multiscale approximation of a Cahn-Larche system with phase separation on the microscale
15:10 Thu 22 Feb, 2018 :: Ingkarni Wardli 5.57 :: Ms Lisa Reischmann :: University of Augsburg

We consider the process of phase separation of a binary system under the influence of mechanical deformation and we derive a mathematical multiscale model, which describes the evolving microstructure taking into account the elastic properties of the involved materials. Motivated by phase-separation processes observed in lipid monolayers in film-balance experiments, the starting point of the model is the Cahn-Hilliard equation coupled with the equations of linear elasticity, the so-called Cahn-Larche system. Owing to the fact that the mechanical deformation takes place on a macroscopic scale whereas the phase separation happens on a microscopic level, a multiscale approach is imperative. We assume the pattern of the evolving microstructure to have an intrinsic length scale associated with it, which, after nondimensionalisation, leads to a scaled model involving a small parameter epsilon>0, which is suitable for periodic-homogenisation techniques. For the full nonlinear problem the so-called homogenised problem is then obtained by letting epsilon tend to zero using the method of asymptotic expansion. Furthermore, we present a linearised Cahn-Larche system and use the method of two-scale convergence to obtain, in a mathematically rigorous way, the associated limit problem, which turns out to have the same structure as in the nonlinear case. Properties of the limit model will be discussed.
Calculating optimal limits for transacting credit card customers
15:10 Fri 2 Mar, 2018 :: Horace Lamb 1022 :: Prof Peter Taylor :: University of Melbourne

Credit card users can roughly be divided into `transactors', who pay off their balance each month, and `revolvers', who maintain an outstanding balance, on which they pay substantial interest. In this talk, we focus on modelling the behaviour of an individual transactor customer. Our motivation is to calculate an optimal credit limit from the bank's point of view. This requires an expression for the expected outstanding balance at the end of a payment period. We establish a connection with the classical newsvendor model. Furthermore, we derive the Laplace transform of the outstanding balance, assuming that purchases are made according to a marked point process and that there is a simplified balance control policy which prevents all purchases in the rest of the payment period when the credit limit is exceeded. We then use the newsvendor model and our modified model to calculate bounds on the optimal credit limit for the more realistic balance control policy that accepts all purchases that do not exceed the limit. We illustrate our analysis using a compound Poisson process example and show that the optimal limit scales with the distribution of the purchasing process, while the probability of exceeding the optimal limit remains constant. Finally, we apply our model to some real credit card purchase data.
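The critical-fractile logic behind the newsvendor bound in this abstract can be sketched in a few lines. The purchase process below is a compound Poisson process, as in the talk's illustrative example, but the rate, purchase-size distribution and the per-dollar under/over costs (c_u, c_o) are invented for illustration; the talk's actual cost structure and balance-control policies are not reproduced here.

```python
import random

random.seed(1)

def simulate_monthly_spend(rate=20.0, mean_purchase=50.0):
    """Total spend in one payment period: a compound Poisson process with
    Poisson(rate) purchase count and exponentially distributed purchase sizes."""
    n = 0
    # Count unit-rate Poisson arrivals up to time `rate` to draw a Poisson count.
    t = random.expovariate(1.0)
    while t < rate:
        n += 1
        t += random.expovariate(1.0)
    return sum(random.expovariate(1.0 / mean_purchase) for _ in range(n))

# Hypothetical per-dollar costs: c_u for turning away spend below the limit
# (lost revenue), c_o for capital tied up by setting the limit too high.
c_u, c_o = 0.02, 0.005
critical_ratio = c_u / (c_u + c_o)   # newsvendor critical fractile

spends = sorted(simulate_monthly_spend() for _ in range(10000))
optimal_limit = spends[int(critical_ratio * len(spends))]
exceed_prob = sum(s > optimal_limit for s in spends) / len(spends)

print(f"critical ratio    {critical_ratio:.3f}")
print(f"optimal limit     {optimal_limit:.0f}")
print(f"P(spend > limit)  {exceed_prob:.3f}")
```

With these invented costs the critical fractile is 0.8, so the optimal limit sits at the 80th percentile of simulated period spend and about 20% of periods exceed it, whatever the scale of the purchasing process, consistent with the observation that the probability of exceeding the optimal limit remains constant.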
Radial Toeplitz operators on bounded symmetric domains
11:10 Fri 9 Mar, 2018 :: Lower Napier LG11 :: Raul Quiroga-Barranco :: CIMAT, Guanajuato, Mexico

Media...
The Bergman space on a complex domain is defined as the space of holomorphic square-integrable functions on the domain. In the case of bounded symmetric domains, these spaces carry interesting structures both for analysis and representation theory. On the other hand, these spaces admit bounded operators obtained as the composition of a multiplier operator and a projection. These Toeplitz operators are in general highly noncommuting. However, there exist large commutative C*-algebras generated by some of these Toeplitz operators, very much related to Lie groups. I will construct an example of such C*-algebras and provide a fairly explicit simultaneous diagonalization of the generating Toeplitz operators.
Equivariant Index, Traces and Representation Theory
11:10 Fri 10 Aug, 2018 :: Barr Smith South Polygon Lecture theatre :: Hang Wang :: University of Adelaide

K-theory of C*-algebras associated to a semisimple Lie group can be understood both from the geometric point of view via the Baum-Connes assembly map and from the representation-theoretic point of view via harmonic analysis of Lie groups. A K-theory generator can be viewed as the equivariant index of some Dirac operator, but also interpreted as a (family of) representation(s) parametrised by the noncompact abelian part in the Levi component of a cuspidal parabolic subgroup. Applying orbital traces to the K-theory group, we obtain the equivariant index as a fixed point formula which, for each K-theory generator for (limits of) discrete series, recovers Harish-Chandra's character formula on the representation theory side. This is a noncompact analogue of the Atiyah-Segal-Singer fixed point theorem in relation to the Weyl character formula. This is joint work with Peter Hochs.
Topological Data Analysis
15:10 Fri 31 Aug, 2018 :: Napier 208 :: Dr Vanessa Robins :: Australian National University

Topological Data Analysis has grown out of work focussed on deriving qualitative and yet quantifiable information about the shape of data. The underlying assumption is that knowledge of shape - the way the data are distributed - permits high-level reasoning and modelling of the processes that created the data. The 0-th order aspect of shape is the number of pieces: "connected components" to a topologist; "clustering" to a statistician. Higher-order topological aspects of shape are holes, quantified as "non-bounding cycles" in homology theory. These signal the existence of some type of constraint on the data-generating process. Homology lends itself naturally to computer implementation, but its naive application is not robust to noise. This inspired the development of persistent homology: an algebraic topological tool that measures changes in the topology of a growing sequence of spaces (a filtration). Persistent homology provides invariants called barcodes or persistence diagrams: sets of intervals recording the birth and death parameter values of each homology class in the filtration. It captures information about the shape of data over a range of length scales, and enables the identification of "noisy" topological structure. Statistical analysis of persistent homology has been challenging because the raw information (the persistence diagrams) is provided as sets of intervals rather than functions. Various approaches to converting persistence diagrams to functional forms have been developed recently, and have found application to data ranging from the distribution of galaxies to porous materials and cancer detection.
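The birth-death bookkeeping described above can be made concrete in dimension 0 (connected components), where persistence reduces to a union-find sweep with the "elder rule": when two components merge, the one born later dies. The vertex birth times and weighted edges below are invented toy data, not material from the talk.

```python
def zero_dim_persistence(births, edges):
    """0-dimensional persistent homology of a filtered graph.
    births[i]: filtration value at which vertex i appears.
    edges: list of (filtration_value, i, j), processed in increasing order.
    Returns sorted (birth, death) intervals; the oldest component never dies."""
    parent = list(range(len(births)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    intervals = []
    for t, i, j in sorted(edges):
        ri, rj = find(i), find(j)
        if ri == rj:
            continue                        # edge creates a cycle, no merge
        # Elder rule: the component with the later birth dies at time t.
        if births[ri] > births[rj]:
            ri, rj = rj, ri
        intervals.append((births[rj], t))
        parent[rj] = ri                     # survivor keeps the earlier birth
    oldest = min(range(len(births)), key=births.__getitem__)
    intervals.append((births[oldest], float('inf')))
    return sorted(intervals)

# Four vertices present from the start, merged at filtration values 0.5, 1, 5.
births = [0.0, 0.0, 0.0, 0.0]
edges = [(1.0, 0, 1), (5.0, 1, 2), (0.5, 2, 3)]
print(zero_dim_persistence(births, edges))
# → [(0.0, 0.5), (0.0, 1.0), (0.0, 5.0), (0.0, inf)]
```

Each returned pair is one bar of the barcode; the single infinite bar is the component that survives the whole filtration, and long finite bars flag structure that persists across many length scales while short bars are the "noisy" features mentioned above.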
Some advances in the formulation of analytical methods for linear and nonlinear dynamics
15:10 Tue 20 Nov, 2018 :: EMG07 :: Dr Vladislav Sorokin :: University of Auckland

In modern engineering it is often necessary to solve problems involving strong parametric excitation and (or) strong nonlinearity; the dynamics of micro- and nanoscale electro-mechanical systems and wave propagation in structures made of corrugated composite materials are just two examples. Numerical methods, although able to predict system behaviour for specific sets of parameters, fail to provide insight into the underlying physics. On the other hand, conventional analytical methods impose severe restrictions on the problem parameter space and (or) on the types of solutions. Thus, the quest for advanced tools to deal with linear and nonlinear structural dynamics continues, and this lecture is concerned with an advanced formulation of an analytical method. The principal novelty is that the presence of a small parameter in the governing equations is not required, so that dynamic problems involving strong parametric excitation and (or) strong nonlinearity can be considered. Another advantage of the method is that it is free from conventional restrictions on the excitation frequency spectrum and is applicable to problems involving combined multiple parametric and (or) direct excitations with incommensurate frequencies, essential for some applications. The use of the method will be illustrated with several examples, including an analysis of the effects of corrugation shape on the dispersion relation and frequency band-gaps of structures, and the dynamics of nonlinear parametric amplifiers.

News matching "Design and analysis of microarray and other experi"

ARC Grant successes
Congratulations to Tony Roberts, Charles Pearce, Robert Elliott, Andrew Metcalfe and all their collaborators on their success in the current round of ARC grants. The projects are "Development of innovative technologies for oil production based on the advanced theory of suspension flows in porous media" (Tony Roberts et al.), "Perturbation and approximation methods for linear operators with applications to train control, water resource management and evolution of physical systems" (Charles Pearce et al.), "Risk Measures and Management in Finance and Actuarial Science Under Regime-Switching Models" (Robert Elliott et al.) and "A new flood design methodology for a variable and changing climate" (Andrew Metcalfe et al.). Posted Mon 26 Oct 09.
ARC Grant successes
The School of Mathematical Sciences has again had outstanding success in the ARC Discovery and Linkage Projects schemes. Congratulations to the following staff for their success in the Discovery Project scheme: Prof Nigel Bean, Dr Josh Ross, Prof Phil Pollett, Prof Peter Taylor, New methods for improving active adaptive management in biological systems, $255,000 over 3 years; Dr Josh Ross, New methods for integrating population structure and stochasticity into models of disease dynamics, $248,000 over three years; A/Prof Matt Roughan, Dr Walter Willinger, Internet traffic-matrix synthesis, $290,000 over three years; Prof Patricia Solomon, A/Prof John Moran, Statistical methods for the analysis of critical care data, with application to the Australian and New Zealand Intensive Care Database, $310,000 over 3 years; Prof Mathai Varghese, Prof Peter Bouwknegt, Supersymmetric quantum field theory, topology and duality, $375,000 over 3 years; Prof Peter Taylor, Prof Nigel Bean, Dr Sophie Hautphenne, Dr Mark Fackrell, Dr Malgorzata O'Reilly, Prof Guy Latouche, Advanced matrix-analytic methods with applications, $600,000 over 3 years. Congratulations to the following staff for their success in the Linkage Project scheme: Prof Simon Beecham, Prof Lee White, A/Prof John Boland, Prof Phil Howlett, Dr Yvonne Stokes, Mr John Wells, Paving the way: an experimental approach to the mathematical modelling and design of permeable pavements, $370,000 over 3 years; Dr Amie Albrecht, Prof Phil Howlett, Dr Andrew Metcalfe, Dr Peter Pudney, Prof Roderick Smith, Saving energy on trains - demonstration, evaluation, integration, $540,000 over 3 years. Posted Fri 29 Oct 10.
New Fellow of the Australian Academy of Science
Professor Mathai Varghese, Professor of Pure Mathematics and ARC Professorial Fellow within the School of Mathematical Sciences, was elected to the Australian Academy of Science. Professor Varghese's citation read "for his distinguished work in geometric analysis involving the topology of manifolds, including the Mathai-Quillen formalism in topological field theory". Posted Tue 30 Nov 10.
ARC Grant Success
Congratulations to the following staff who were successful in securing funding from the Australian Research Council Discovery Projects scheme. Associate Professor Finnur Larusson, awarded $270,000 for his project Flexibility and symmetry in complex geometry; Dr Thomas Leistner, awarded $303,464 for his project Holonomy groups in Lorentzian geometry; Professor Michael Murray and Dr Daniel Stevenson (Glasgow), awarded $270,000 for their project Bundle gerbes: generalisations and applications; Professor Mathai Varghese, awarded $105,000 for his project Advances in index theory; and Professor Anthony Roberts and Professor Ioannis Kevrekidis (Princeton), awarded $330,000 for their project Accurate modelling of large multiscale dynamical systems for engineering and scientific simulation and analysis. Posted Tue 8 Nov 11.
Elder Professor Mathai Varghese Awarded Australian Laureate Fellowship
Professor Mathai Varghese, Elder Professor of Mathematics in the School of Mathematical Sciences, has been awarded an Australian Laureate Fellowship worth $1.64 million to advance Index Theory and its applications. The project is expected to enhance Australia’s position at the forefront of international research in geometric analysis. Posted Thu 15 Jun 17.


Publications matching "Design and analysis of microarray and other experi"

Publications
Inversion of analytically perturbed linear operators that are singular at the origin
Howlett, P; Avrachenkov, K; Pearce, Charles; Ejov, V, Journal of Mathematical Analysis and Applications 353 (68–84) 2009
Optimal designs for 2-color microarray experiments
Sanchez, Penny Susan; Glonek, Garique, Biostatistics 10 (561–574) 2009
Portfolio risk minimization and differential games
Elliott, Robert; Siu, T, Nonlinear Analysis-Theory Methods & Applications In Press (–) 2009
Schlicht Envelopes of Holomorphy and Foliations by Lines
Larusson, Finnur; Shafikov, R, Journal of Geometric Analysis 19 (373–389) 2009
A total probability approach to flood frequency analysis in tidal river reaches
Need, Steven; Lambert, Martin; Metcalfe, Andrew, World Environmental and Water Resources Congress 2008 Ahupua'a, Honolulu 12/05/08
Quantitative analysis of incorrectly-configured bogon-filter detection
Arnold, Jonathan; Maennel, Olaf; Flavel, Ashley; McMahon, Jeremy; Roughan, Matthew, Australasian Telecommunication Networks and Applications Conference, Adelaide 07/12/08
A mixer design for the pigtail braid
Binder, Benjamin; Cox, Stephen, Fluid Dynamics Research 40 (34–44) 2008
A non-linear filter
Elliott, Robert; Leung, H; Deng, J, Stochastic Analysis and Applications 26 (856–862) 2008
Frequency analysis of rainfall and streamflow extremes accounting for seasonal and climatic partitions
Leonard, Michael; Metcalfe, Andrew; Lambert, Martin, Journal of Hydrology 348 (135–147) 2008
Gene profiling for determining pluripotent genes in a time course microarray experiment
Tuke, Simon; Glonek, Garique; Solomon, Patricia, Biostatistics 10 (80–93) 2008
Nonlinear transient heat conduction problems for a class of inhomogeneous anisotropic materials by BEM
Azis, Mohammad; Clements, David, Engineering Analysis With Boundary Elements 32 (1054–1060) 2008
Optimization of a shot peening process
Petit-Renaud, F; Evans, Justin; Metcalfe, Andrew; Shaw, B, Institution of Mechanical Engineers. Proceedings. Part L: Journal of Materials: Design and Applications 222 (277–289) 2008
Internet traffic and multiresolution analysis
Zhang, Y; Ge, Z; Diggavi, S; Mao, Z; Roughan, Matthew; Vaishampayan, V; Willinger, W; Zhang, Y, chapter in Markov Processes and Related Topics: A Festschrift for Thomas G. Kurtz (Institute of Mathematical Statistic) 215–234, 2008
An architecture for IEEE 802.16 MAC scheduler design
Tang, Tze; Green, D; Rumsewicz, Michael; Bean, Nigel, 2007 15th IEEE International Conference on Networks, Adelaide, Australia 19/11/07
Aspects of Dirac operators in analysis
Eastwood, Michael; Ryan, J, Milan Journal of Mathematics 75 (91–116) 2007
Gene expression analysis of multiple gastrointestinal regions reveals activation of common cell regulatory pathways following cytotoxic chemotherapy
Bowen, Joanne; Gibson, Rachel; Tsykin, Anna; Stringer, Andrea Marie; Logan, Richard; Keefe, Dorothy, International Journal of Cancer 121 (1847–1856) 2007
Microarray gene expression profiling of osteoarthritic bone suggests altered bone remodelling, WNT and transforming growth factor-beta/bone morphogenic protein signalling
Hopwood, Blair; Tsykin, Anna; Findlay, David; Fazzalari, Nicola, Arthritis Research & Therapy 9 (WWW 1–WWW 21) 2007
Nonclassical symmetry solutions for reaction-diffusion equations with explicit spatial dependence
Hajek, Bronwyn; Edwards, M; Broadbridge, P; Williams, G, Nonlinear Analysis-Theory Methods & Applications 67 (2541–2552) 2007
Optimal multilinear estimation of a random vector under constraints of causality and limited memory
Howlett, P; Torokhti, Anatoli; Pearce, Charles, Computational Statistics & Data Analysis 52 (869–878) 2007
Statistics in review; Part 2: Generalised linear models, time-to-event and time-series analysis, evidence synthesis and clinical trials
Moran, John; Solomon, Patricia, Critical care and Resuscitation 9 (187–197) 2007
The solution of a free boundary problem related to environmental management systems
Elliott, Robert; Filinkov, Alexei, Stochastic Analysis and Applications 25 (1189–1202) 2007
Experimental Design and Analysis of Microarray Data
Wilson, C; Tsykin, Anna; Wilkinson, Christopher; Abbott, C, chapter in Bioinformatics (Elsevier Ltd) 1–36, 2006
A Markov analysis of social learning and adaptation
Wheeler, Scott; Bean, Nigel; Gaffney, Janice; Taylor, Peter, Journal of Evolutionary Economics 16 (299–319) 2006
Data-recursive smoother formulae for partially observed discrete-time Markov chains
Elliott, Robert; Malcolm, William, Stochastic Analysis and Applications 24 (579–597) 2006
Mathematical analysis of an extended Mumford-Shah model for image segmentation
Tao, Trevor; Crisp, David; Van Der Hoek, John, Journal of Mathematical Imaging and Vision 24 (327–340) 2006
Methodology in meta-analysis: a study from critical care meta-analytic practice
Moran, John; Solomon, Patricia; Warn, D, Health Services and Outcomes Research Methodology 5 (207–226) 2006
On the indentation of an inhomogeneous anisotropic elastic material by multiple straight rigid punches
Clements, David; Ang, W, Engineering Analysis With Boundary Elements 30 (284–291) 2006
Stochastic volatility model with filtering
Elliott, Robert; Miao, H, Stochastic Analysis and Applications 24 (661–683) 2006
The influence of urban land-use on non-motorised transport casualties
Wedagama, D; Bird, R; Metcalfe, Andrew, Accident Analysis and Prevention 38 (1049–1057) 2006
Three-dimensional flow due to a microcantilever oscillating near a wall: an unsteady slender-body analysis
Clarke, Richard; Jensen, O; Billingham, J; Williams, P, Proceedings of the Royal Society of London Series A-Mathematical Physical and Engineering Sciences 462 (913–933) 2006
Analysis of a practical control policy for water storage in two connected dams
Howlett, P; Piantadosi, J; Pearce, Charles, chapter in Continuous optimization: Current trends and modern applications (Springer) 435–450, 2005
Diversity sensitivity and multimodal Bayesian statistical analysis by relative entropy
Leipnik, R; Pearce, Charles, The ANZIAM Journal 47 (277–287) 2005
Elastic plastic analysis of shallow shells - A new approach
Mazumdar, Jagan; Ghosh, Abir; Hewitt, J; Bhattacharya, P, The ANZIAM Journal 47 (121–130) 2005
Hidden Markov chain filtering for a jump diffusion model
Wu, P; Elliott, Robert, Stochastic Analysis and Applications 23 (153–163) 2005
Hidden Markov filter estimation of the occurrence time of an event in a financial market
Elliott, Robert; Tsoi, A, Stochastic Analysis and Applications 23 (1165–1177) 2005
Meta-analysis of controlled trials of ventilator therapy in acute lung injury and acute respiratory distress syndrome: an alternative perspective
Moran, John; Bersten, A; Solomon, Patricia, Intensive Care Medicine 31 (227–235) 2005
Smoothly parameterized Čech cohomology of complex manifolds
Bailey, T; Eastwood, Michael; Gindikin, S, Journal of Geometric Analysis 15 (9–23) 2005
Image processing of finite size rat retinal ganglion cells using multifractal and local connected fractal analysis
Jelinek, H; Cornforth, D; Roberts, Anthony John; Landini, G; Bourke, P; Iorio, A, chapter in AI 2004: Advances in Artificial Intelligence (Springer) 961–966, 2005
Network-wide inter-domain routing policies: Design and realization
Maennel, Olaf; Feldmann, A; Reiser, C; Volk, R; Bohm, H, Nanog34, Seattle, WA, USA 15/05/05
On the analysis of a case-control study with differential measurement error
Glonek, Garique, 20th International Workshop on Statistical Modelling, Sydney, Australia 10/07/05
Dixmier traces as singular symmetric functionals and applications to measurable operators
Lord, Steven; Sedaev, A; Sukochev, F, Journal of Functional Analysis 224 (72–106) 2005
Filtering, smoothing and M-ary detection with discrete time poisson observations
Elliott, Robert; Malcolm, William; Aggoun, L, Stochastic Analysis and Applications 23 (939–952) 2005
Finite-dimensional filtering and control for continuous-time nonlinear systems
Elliott, Robert; Aggoun, L; Benmerzouga, A, Stochastic Analysis and Applications 22 (499–505) 2005
Nonlinear analysis of rubber-based polymeric materials with thermal relaxation models
Melnik, R; Strunin, D; Roberts, Anthony John, Numerical Heat Transfer Part A-Applications 47 (549–569) 2005
A deterministic discretisation-step upper bound for state estimation via Clark transformations
Malcolm, William; Elliott, Robert; Van Der Hoek, John, J.A.M.S.A. Journal of Applied Mathematics and Stochastic Analysis 2004 (371–384) 2004
A sufficient condition for the uniform exponential stability of time-varying systems with noise
Grammel, G; Maizurna, Isna, Nonlinear Analysis-Theory Methods & Applications 56 (951–960) 2004
Factorial and time course designs for cDNA microarray experiments
Glonek, Garique; Solomon, Patricia, Biostatistics 5 (89–111) 2004
Gerbes, Clifford Modules and the index theorem
Murray, Michael; Singer, Michael, Annals of Global Analysis and Geometry 26 (355–367) 2004
Modern approach of design of welded components subjected to fatigue loading
Ghosh, Abir, Journal of Structural Engineering-ASCE 130 (812–820) 2004
Reactions to genetically modified food crops and how perception of risks and benefits influences consumers' information gathering
Wilson, Carlene; Evans, G; Leppard, Phillip; Syrette, J, Risk Analysis 24 (1311–1321) 2004
A dual-reciprocity boundary element method for a class of elliptic boundary value problems for non-homogeneous anisotropic media
Ang, W; Clements, David; Vahdati, N, Engineering Analysis With Boundary Elements 27 (49–55) 2003
Compact Kähler surfaces with trivial canonical bundle
Buchdahl, Nicholas, Annals of Global Analysis and Geometry 23 (189–204) 2003
Complex analysis and the Funk transform
Bailey, T; Eastwood, Michael; Gover, A; Mason, L, Journal of the Korean Mathematical Society 40 (577–593) 2003
Exponential stability and partial averaging
Grammel, G; Maizurna, Isna, Journal of Mathematical Analysis and Applications 283 (276–286) 2003
Hyperbolic monopoles and holomorphic spheres
Murray, Michael; Norbury, Paul; Singer, Michael, Annals of Global Analysis and Geometry 23 (101–128) 2003
Method of best successive approximations for nonlinear operators
Torokhti, Anatoli; Howlett, P; Pearce, Charles, Journal of Computational Analysis and Applications 5 (299–312) 2003
On nonlinear operator approximation with preassigned accuracy
Howlett, P; Pearce, Charles; Torokhti, Anatoli, Journal of Computational Analysis and Applications 5 (273–297) 2003
Rumours, epidemics, and processes of mass action: Synthesis and analysis
Dickinson, Rowland; Pearce, Charles, Mathematical and Computer Modelling 38 (1157–1167) 2003
Resampling-based multiple testing for microarray data analysis (Invited discussion of paper by Ge, Dudoit and Speed)
Glonek, Garique; Solomon, Patricia, Test 12 (50–53) 2003
Some aspects of the design and monitoring of clinical trials
Moran, John; Solomon, Patricia, Critical care and Resuscitation 5 (137–146) 2003
A concavity result for network design problems
Ketabi, Saeedeh; Salzborn, Franz, Journal of Global Optimization 24 (79–88) 2002
An analysis of noise enhanced information transmission in an array of comparators
McDonnell, Mark; Abbott, Derek; Pearce, Charles, Microelectronics Journal 33 (1079–1089) 2002
Approximating spectral invariants of Harper operators on graphs
Varghese, Mathai; Yates, Stuart, Journal of Functional Analysis 188 (111–136) 2002
Mathematical methods for spatially cohesive reserve design
McDonnell, Mark; Possingham, Hugh; Ball, Ian; Cousins, Elizabeth, Environmental Modeling & Assessment 7 (107–114) 2002
Portfolio optimization, hidden Markov models, and technical analysis of P&F-charts
Elliott, Robert; Hinz, J, International Journal of Theoretical and Applied Finance 5 (385–399) 2002
An edge-of-the-wedge theorem for hypersurface CR functions
Eastwood, Michael; Graham, C, Journal of Geometric Analysis 11 (589–602) 2001
Csiszár f-divergence, Ostrowski's inequality and mutual information
Dragomir, S; Gluscevic, Vido; Pearce, Charles, Nonlinear Analysis-Theory Methods & Applications 47 (2375–2386) 2001
Equivariant Seiberg-Witten Floer homology
Marcolli, M; Wang, Bai-Ling, Communications in Analysis and Geometry 9 (451–639) 2001
On best-approximation problems for nonlinear operators
Howlett, P; Pearce, Charles; Torokhti, Anatoli, Nonlinear Functional Analysis and Applications 6 (351–368) 2001
On the extended reversed Meir inequality
Guljas, B; Pearce, Charles; Pecaric, Josip, Journal of Computational Analysis and Applications 3 (243–247) 2001
The Mx/G/1 queue with queue length dependent service times
Choi, B; Kim, Y; Shin, Y; Pearce, Charles, J.A.M.S.A. Journal of Applied Mathematics and Stochastic Analysis 14 (399–419) 2001
The modelling and numerical simulation of causal non-linear systems
Howlett, P; Torokhti, Anatoli; Pearce, Charles, Nonlinear Analysis-Theory Methods & Applications 47 (5559–5572) 2001
Best estimators of second degree for data analysis
Howlett, P; Pearce, Charles; Torokhti, Anatoli, ASMDA 2001, Compiegne, France 12/06/01
A continuous time Kronecker's lemma and martingale convergence
Elliott, Robert, Stochastic Analysis and Applications 19 (433–437) 2001
Statistical analysis of medical data: New developments - Book review
Solomon, Patricia, Biometrics 57 (327–328) 2001
Meta-analysis, overviews and publication bias
Solomon, Patricia; Hutton, Jonathon, Statistical Methods in Medical Research 10 (245–250) 2001
Spectral analysis of heart sounds and vibration analysis of heart valves
Mazumdar, Jagan, EMAC 2000, RMIT University, Melbourne, Australia 10/09/00
A martingale analysis of hysteretic overload control
Roughan, Matthew; Pearce, Charles, Advances in Performance Analysis 3 (1–30) 2000
A note on higher cohomology groups of Kähler quotients
Wu, Siye, Annals of Global Analysis and Geometry 18 (569–576) 2000
Local Constraints on Einstein-Weyl geometries: The 3-dimensional case
Eastwood, Michael; Tod, K, Annals of Global Analysis and Geometry 18 (1–27) 2000
Numerical design tools for thermal replication of optical-quality surfaces
Stokes, Yvonne, Computers & Fluids 29 (401–414) 2000
On Anastassiou's generalizations of the Ostrowski inequality and related results
Pearce, Charles; Pecaric, Josip, Journal of Computational Analysis and Applications 2 (215–276) 2000

Advanced search options

You may be able to improve your search results by using the following syntax:

Query                      Matches the following
Asymptotic Equation        Anything with "Asymptotic" or "Equation".
+Asymptotic +Equation      Anything with "Asymptotic" and "Equation".
+Stokes -"Navier-Stokes"   Anything containing "Stokes" but not "Navier-Stokes".
Dynam*                     Anything containing "Dynamic", "Dynamical", "Dynamicist" etc.