
A Bivariate Zero-inflated Poisson Regression Model and application to some Dental Epidemiological data 14:10 Fri 27 Oct, 2006 :: G08 Mathematics Building University of Adelaide :: Prof Sudhir Paul
Data in the form of paired (pre-treatment, post-treatment) counts arise in the study of the effects of several treatments after accounting for possible covariate effects. An example of such a data set comes from a dental epidemiological study in Belo Horizonte (the Belo Horizonte caries prevention study), which evaluated various programmes for reducing caries. These data may also show more pairs of zeros than can be accounted for by a simpler model, such as a bivariate Poisson regression model. In such situations we propose a zero-inflated bivariate Poisson regression (ZIBPR) model for the paired (pre-treatment, post-treatment) count data. We develop an EM algorithm to obtain maximum likelihood estimates of the parameters of the ZIBPR model. Further, we obtain the exact Fisher information matrix of the maximum likelihood estimates and develop a procedure for testing treatment effects. The procedure to detect treatment effects based on the ZIBPR model is compared, in terms of size, by simulations, with an earlier procedure using a zero-inflated Poisson regression (ZIPR) model of the post-treatment count with the pre-treatment count treated as a covariate. The procedure based on the ZIBPR model holds its level most effectively. A further simulation study indicates good power properties of the procedure based on the ZIBPR model. We then compare our analysis of the decayed, missing and filled teeth (DMFT) index data from the caries prevention study, based on the ZIBPR model, with the analysis using a zero-inflated Poisson regression model in which the pre-treatment DMFT index is taken to be a covariate.
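The zero-inflation mechanism at the heart of the ZIBPR model can be illustrated in the simpler univariate case. The sketch below is not the paper's bivariate model: it fits a zero-inflated Poisson by EM, where the E-step computes the posterior probability that each observed zero is structural and the M-step updates the mixing weight and Poisson mean. All parameter values are illustrative.

```python
import numpy as np

def fit_zip_em(y, n_iter=200):
    """Fit a univariate zero-inflated Poisson by EM.

    Model: with probability pi the count is a structural zero,
    otherwise it is Poisson(lam).
    """
    y = np.asarray(y, dtype=float)
    pi, lam = 0.5, max(y.mean(), 0.1)  # crude initial values
    for _ in range(n_iter):
        # E-step: posterior probability that each observed zero is structural
        p0 = pi + (1 - pi) * np.exp(-lam)
        z = np.where(y == 0, pi / p0, 0.0)
        # M-step: update the mixing weight and the Poisson mean
        pi = z.mean()
        lam = ((1 - z) * y).sum() / (1 - z).sum()
    return pi, lam

rng = np.random.default_rng(0)
# simulate: 30% structural zeros, Poisson(3) counts otherwise
y = rng.poisson(3.0, size=5000) * (rng.random(5000) > 0.3)
pi_hat, lam_hat = fit_zip_em(y)
```

The bivariate extension adds a correlation component between the paired counts, and the Fisher information and treatment-effect tests of the talk build on that richer likelihood.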

Alberta Power Prices 15:10 Fri 9 Mar, 2007 :: G08 Mathematics Building University of Adelaide :: Prof. Robert Elliott
The pricing of electricity involves several interesting features. Apart from daily, weekly and seasonal fluctuations, power prices often exhibit large spikes. To some extent this is because electricity cannot be stored. We propose a model for power prices in the Alberta market. This involves a diffusion process modified by a factor related to a Markov chain which describes the number of large generators on line. The model is calibrated and future contracts priced. 
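A minimal sketch of this kind of model, with invented parameter values rather than anything calibrated to Alberta data: a mean-reverting diffusion gives the baseline price, and a two-state Markov chain, standing in for the number of large generators on line, multiplies it by a spike factor when capacity is lost.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-state chain: 0 = all large generators on line, 1 = outage.
# Hypothetical hourly transition probabilities and spike factor.
P = np.array([[0.99, 0.01],
              [0.20, 0.80]])
spike_factor = np.array([1.0, 6.0])  # price multiplier in each state

n, dt = 24 * 365, 1.0
mean_price, kappa, sigma = 50.0, 0.1, 5.0

x = np.empty(n)             # mean-reverting diffusion (baseline price)
s = np.empty(n, dtype=int)  # Markov chain state
x[0], s[0] = mean_price, 0
for t in range(1, n):
    s[t] = rng.choice(2, p=P[s[t - 1]])
    # Euler step of an Ornstein-Uhlenbeck process around the mean price
    x[t] = x[t - 1] + kappa * (mean_price - x[t - 1]) * dt \
           + sigma * np.sqrt(dt) * rng.standard_normal()
price = x * spike_factor[s]
```

Because the outage state is entered rarely but exited slowly, the simulated series shows the characteristic quiet baseline punctuated by short-lived spikes.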

Identifying the source of photographic images by analysis of JPEG quantization artifacts 15:10 Fri 27 Apr, 2007 :: G08 Mathematics Building University of Adelaide :: Dr Matthew Sorell
In a forensic context, digital photographs are becoming more common as sources of evidence in criminal and civil matters. Questions that arise include identifying the make and model of a camera to assist in the gathering of physical evidence; matching photographs to a particular camera through the camera's unique characteristics; and determining the integrity of a digital image, including whether the image contains steganographic information. From a digital file perspective, there is also the question of whether metadata has been deliberately modified to mislead the investigator, and in the case of multiple images, whether a timeline can be established from the various timestamps within the file, imposed by the operating system or determined by other image characteristics. This talk is concerned specifically with techniques to identify the make, model series and particular source camera model given a digital image. We exploit particular characteristics of the camera's JPEG coder to demonstrate that such identification is possible, and that even when an image has subsequently been reprocessed, there are often sufficient residual characteristics of the original coding to at least narrow down the possible camera models of interest. 

Flooding in the Sundarbans 15:10 Fri 18 May, 2007 :: G08 Mathematics Building University of Adelaide :: Steve Need
The Sundarbans is a region of deltaic isles formed in the mouth of the Ganges River on the border between India and Bangladesh. As the largest mangrove forest in the world it is a World Heritage Site; however, it is also home to several remote communities who have long inhabited some regions. Many of the inhabited islands are low-lying and are particularly vulnerable to flooding, a major hazard of living in the region. Determining suitable levels of protection for these communities relies upon accurate assessment of the flood risk they face. Only recently has the Indian Government commissioned a study into flood risk in the Sundarbans with a view to determining where flood protection needs to be upgraded.
Flooding due to rainfall is limited by the relatively small catchment sizes, so the primary causes of flooding in the Sundarbans are unnaturally high tides, tropical cyclones (which regularly sweep through the Bay of Bengal), or some combination of the two. Because of the link between tidal anomalies and drops in local barometric pressure, the two causes of flooding may be highly correlated. I propose stochastic methods for analysing the flood risk and present the early work of a case study which shows the direction of investigation. The strategy involves linking several components: a stochastic approximation to a hydraulic flood routing model, FARIMA and GARCH models for storm surge, and a stochastic model for cyclone occurrence and tracking. The methods suggested are general and should have applications in other cyclone-affected regions. 
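As one concrete ingredient of such a strategy, a GARCH model captures the burstiness of surge residuals: quiet spells punctuated by clusters of large deviations. Below is a minimal GARCH(1,1) simulator; the parameter values are invented for illustration, whereas the talk's models would be fitted to tide-gauge records.

```python
import numpy as np

def simulate_garch11(n, omega, alpha, beta, seed=0):
    """Simulate a GARCH(1,1) process: eps_t = sigma_t * z_t with
    sigma_t^2 = omega + alpha * eps_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    eps = np.zeros(n)
    # start at the unconditional variance omega / (1 - alpha - beta)
    var = np.full(n, omega / (1 - alpha - beta))
    for t in range(1, n):
        var[t] = omega + alpha * eps[t - 1] ** 2 + beta * var[t - 1]
        eps[t] = np.sqrt(var[t]) * rng.standard_normal()
    return eps

# persistence alpha + beta = 0.95 gives slowly decaying volatility clusters
surge = simulate_garch11(10_000, omega=0.05, alpha=0.1, beta=0.85)
```

A FARIMA component would then supply the long-memory dependence in the surge level itself, with the GARCH recursion driving the innovation variance.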

Likelihood inference for a problem in particle physics 15:10 Fri 27 Jul, 2007 :: G04 Napier Building University of Adelaide :: Prof. Anthony Davison
The Large Hadron Collider (LHC), a particle accelerator located at CERN, near Geneva, is (currently!) expected to start operation in early 2008. It is located in an underground tunnel 27km in circumference and, when fully operational, will be the world's largest and highest-energy particle accelerator. It is hoped that it will provide evidence for the existence of the Higgs boson, the last remaining particle of the so-called Standard Model of particle physics. The quantity of data that will be generated by the LHC is roughly equivalent to that of the European telecommunications network, but this will be boiled down to just a few numbers. After a brief introduction, this talk will outline elements of the statistical problem of detecting the presence of a particle, and then sketch how higher-order likelihood asymptotics may be used for signal detection in this context. The work is joint with Nicola Sartori, of the Università Ca' Foscari, in Venice. 

Insights into the development of the enteric nervous system and Hirschsprung's disease 15:10 Fri 24 Aug, 2007 :: G08 Mathematics building University of Adelaide :: Assoc. Prof. Kerry Landman :: Department of Mathematics and Statistics, University of Melbourne
During the development of the enteric nervous system, neural crest (NC) cells must first migrate into and colonise the entire gut from stomach to anal end. The migratory precursor NC cells change type and differentiate into neurons and glia cells. These cells form the enteric nervous system, which gives rise to normal gut function and peristaltic contraction. Failure of the NC cells to invade the whole gut results in a lack of neurons in a length of the terminal intestine. This potentially fatal condition, marked by intractable constipation, is called Hirschsprung's Disease. The interplay between cell migration, cell proliferation and embryonic gut growth is important to the success of the NC cell colonisation process.
Multiscale models are needed in order to model the different spatiotemporal scales of the NC invasion. For example, the NC invasion wave moves into unoccupied regions of the gut with a wave speed of around 40 microns per hour. New time-lapse techniques have shown that there is a web-like network structure within the invasion wave. Furthermore, within this network, individual cell trajectories vary considerably.
We have developed a population-scale model for the basic rules governing NC cell invasive behaviour, incorporating the important mechanisms. The model predictions were tested experimentally, and mathematical and experimental results agreed. The results provide an understanding of why many of the genes implicated in Hirschsprung's Disease influence NC population size. Our recently developed individual cell-based model also produces an invasion wave with a well-defined wave speed; in addition, individual cell trajectories within the invasion wave can be extracted. Further challenges in modelling the various scales of the developmental system will be discussed. 

Moderated Statistical Tests for Digital Gene Expression Technologies 15:10 Fri 19 Oct, 2007 :: G04 Napier Building University of Adelaide :: Dr Gordon Smyth :: Walter and Eliza Hall Institute of Medical Research in Melbourne, Australia
Digital gene expression (DGE) technologies measure gene expression by counting sequence tags. They are sensitive technologies for measuring gene expression on a genomic scale, without the need for prior knowledge of the genome sequence. As the cost of DNA sequencing decreases, the number of DGE datasets is expected to grow dramatically. Various tests of differential expression have been proposed for replicated DGE data using overdispersed binomial or Poisson models for the counts, but none of these are usable when the number of replicates is very small. We develop tests using the negative binomial distribution to model overdispersion relative to the Poisson, and use conditional weighted likelihood to moderate the level of overdispersion across genes. A heuristic empirical Bayes algorithm is developed which is applicable to very general likelihood estimation contexts. Not only is our strategy applicable even with the smallest number of replicates, but it also proves to be more powerful than previous strategies when more replicates are available. The methodology is applicable to other counting technologies, such as proteomic spectral counts.
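The flavour of the approach can be sketched in toy form. The code below is not the authors' conditional weighted likelihood method: it estimates a per-gene negative binomial dispersion by the method of moments, shrinks it toward the common dispersion with a fixed prior weight as a crude stand-in for moderation, and applies a Wald test to the group means. All names and parameters are illustrative.

```python
import numpy as np
from scipy import stats

def moderated_nb_test(counts, group, prior_weight=10.0):
    """Toy moderated test: counts is a (genes, samples) matrix,
    group is a 0/1 label per sample. Returns two-sided p-values."""
    counts = np.asarray(counts, dtype=float)
    g1, g2 = counts[:, group == 0], counts[:, group == 1]
    mu1, mu2 = g1.mean(axis=1), g2.mean(axis=1)

    def mom_phi(x):
        # method-of-moments dispersion from var = mu + phi * mu^2
        m, v = x.mean(axis=1), x.var(axis=1, ddof=1)
        return np.maximum((v - m) / np.maximum(m, 1e-8) ** 2, 0.0)

    phi = 0.5 * (mom_phi(g1) + mom_phi(g2))
    # shrink each gene's dispersion toward the common value
    phi_common = phi.mean()
    n = counts.shape[1]
    phi_mod = (prior_weight * phi_common + n * phi) / (prior_weight + n)
    # Wald test on the difference of group means under NB variance
    se = np.sqrt((mu1 + phi_mod * mu1**2) / g1.shape[1]
                 + (mu2 + phi_mod * mu2**2) / g2.shape[1])
    z = (mu1 - mu2) / np.maximum(se, 1e-8)
    return 2 * stats.norm.sf(np.abs(z))

rng = np.random.default_rng(2)
phi_true, mu = 0.1, 50.0
G, n_per = 200, 5
counts = rng.negative_binomial(1 / phi_true, 1 / (1 + phi_true * mu),
                               size=(G, 2 * n_per))
# gene 0 is four-fold up-regulated in the second group
counts[0, n_per:] = rng.negative_binomial(
    1 / phi_true, 1 / (1 + phi_true * 4 * mu), size=n_per)
group = np.repeat([0, 1], n_per)
pvals = moderated_nb_test(counts, group)
```

The actual method replaces the ad hoc shrinkage with a weighted likelihood sharing information across genes, and exact rather than Wald-type tests, which is what makes it usable with very few replicates.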


Adaptive Fast Convergence: Towards Optimal Reconstruction Guarantees for Phylogenetic Trees 16:00 Tue 1 Apr, 2008 :: School Board Room :: Shlomo Moran :: Computer Science Department, Technion, Haifa, Israel
One of the central challenges in phylogenetics is to reliably resolve as much as possible of the topology of the evolutionary tree from short taxon sequences. In the past decade much attention has focused on fast converging reconstruction algorithms, which guarantee (w.h.p.) correct reconstruction of the entire tree from sequences of near-minimal length (assuming some accepted model of sequence evolution along the tree). The major drawback of these methods is that when the sequences are too short to correctly reconstruct the tree in its entirety, they do not provide any reconstruction guarantee, even for sufficiently long edges. Specifically, the presence of some very short edges in the model tree may prevent these algorithms from reconstructing even edges of moderate length.
In this talk we present a stronger reconstruction guarantee called "adaptive fast convergence", which guarantees the correct reconstruction of all sufficiently long edges of the original tree. We then present a general technique which (unlike previous reconstruction techniques) employs dynamic edge contraction during the reconstruction of the tree. We conclude by demonstrating how this technique is used to achieve adaptive fast convergence. 

Global and Local stationary modelling in finance: Theory and empirical evidence 14:10 Thu 10 Apr, 2008 :: G04 Napier Building University of Adelaide :: Prof. Dominique Guégan :: Université Paris 1 Panthéon-Sorbonne
Modelling real data sets with second-order stochastic processes requires that the data satisfy the second-order stationarity condition. This stationarity condition concerns the unconditional moments of the process. It is in that context that most models developed since the 1960s have been studied; we refer to the ARMA processes (Brockwell and Davis, 1988), the ARCH, GARCH and EGARCH models (Engle, 1982; Bollerslev, 1986; Nelson, 1990), the SETAR process (Lim and Tong, 1980; Tong, 1990), the bilinear model (Granger and Andersen, 1978; Guégan, 1994), the EXPAR model (Haggan and Ozaki, 1980), long memory processes (Granger and Joyeux, 1980; Hosking, 1981; Gray, Zhang and Woodward, 1989; Beran, 1994; Giraitis and Leipus, 1995; Guégan, 2000) and switching processes (Hamilton, 1988). For all these models we obtain an invertible causal solution under specific conditions on the parameters, so that forecast points and forecast intervals are available.
Thus, the stationarity assumption is the basis for a general asymptotic theory for identification, estimation and forecasting. It guarantees that the increase of the sample size leads to more and more information of the same kind which is basic for an asymptotic theory to make sense.
Nonstationary modelling also has a long tradition in econometrics, based on the conditional moments of the data generating process. It appears mainly in heteroscedastic and volatility models, like GARCH and related models and stochastic volatility processes (Ghysels, Harvey and Renault, 1997). Nonstationarity also appears in a different way in structural change models, like the switching models (Hamilton, 1988), the stop-break model (Diebold and Inoue, 2001; Breidt and Hsu, 2002; Granger and Hyung, 2004) and the SETAR models, for instance. It can also be observed in linear models with time-varying coefficients (Nicholls and Quinn, 1982; Tsay, 1987).
Thus, using stationary unconditional moments suggests global stationarity for the model, whereas using nonstationary unconditional moments, nonstationary conditional moments, or assuming the existence of states suggests that this global stationarity fails and that we only observe locally stationary behaviour.
The growing evidence of instability in the stochastic behaviour of stocks, exchange rates and some economic data sets (growth rates, for instance), characterized by volatility or by jumps in the variance or in the levels of prices, forces us to question the assumption of global stationarity and its consequences for modelling, particularly for forecasting. These remarks raise several questions.
1. What kinds of nonstationarity affect the major financial and economic data sets? How to detect them?
2. Local and global stationarities: How are they defined?
3. What is the impact of evidence of nonstationarity on statistics computed from globally nonstationary data sets?
4. How can we analyse data sets in the globally nonstationary framework? Does the asymptotic theory work in a nonstationary framework?
5. What kind of models create local stationarity instead of global stationarity? How can we use them to develop a modelling and a forecasting strategy?
These questions have begun to be discussed in the economic literature. For some of them the answers are known; for others, very few works exist. In this talk I will discuss all these problems, and will propose two new strategies and models to solve them. Several interesting topics in empirical finance awaiting future research will also be discussed.


Computational Methods for Phase Response Analysis of Circadian Clocks 15:10 Fri 18 Jul, 2008 :: G04 Napier Building University of Adelaide :: Prof. Linda Petzold :: Dept. of Mechanical and Environmental Engineering, University of California, Santa Barbara
Circadian clocks govern daily behaviors of organisms in all kingdoms of life. In mammals, the master clock resides in the suprachiasmatic nucleus (SCN) of the hypothalamus. It is composed of thousands of neurons, each of which contains a sloppy oscillator: a molecular clock governed by a transcriptional feedback network. Via intercellular signaling, the cell population synchronizes spontaneously, forming a coherent oscillation. This multioscillator is then entrained to its environment by the daily light/dark cycle.
Both at the cellular and tissue levels, the most important feature of the clock is its ability not simply to keep time, but to adjust its time, or phase, in response to signals. We present the parametric impulse phase response curve (pIPRC), an analytical analog to the phase response curve (PRC) used experimentally. We use the pIPRC to understand both the consequences of intercellular signaling and the light entrainment process. Further, we determine which model components determine the phase response behavior of a single oscillator by using a novel model reduction technique. We reduce the number of model components while preserving the pIPRC and then incorporate the resultant model into a coupled SCN tissue model. Emergent properties, including the ability of the population to synchronize spontaneously, are preserved in the reduction. Finally, we present some mathematical tools for the study of synchronization in a network of coupled, noisy oscillators.
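For intuition about phase response curves, the radial isochron ("Poincaré") oscillator is a standard textbook example where the PRC can be written down directly: a kick along the x-axis moves the state off the unit limit cycle, and because the isochrons of this oscillator are radial lines, the new phase is just the polar angle of the kicked point. This toy is only an analogy for the pIPRC machinery in the talk.

```python
import numpy as np

def prc_poincare(theta, amp):
    """Phase shift of a radial-isochron oscillator (unit limit cycle,
    radial isochrons) after an instantaneous kick of size amp along x,
    applied at phase theta (radians)."""
    new_theta = np.arctan2(np.sin(theta), np.cos(theta) + amp)
    # wrap the shift into (-pi, pi]
    return (new_theta - theta + np.pi) % (2 * np.pi) - np.pi

phases = np.linspace(0.0, 2 * np.pi, 361)
prc = prc_poincare(phases, amp=0.2)
```

For small kicks the curve is approximately -amp*sin(theta): kicks delivered in the first half of the cycle delay the oscillator, and kicks in the second half advance it, which is the qualitative shape light-pulse PRCs also show.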


The Role of Walls in Chaotic Mixing 15:10 Fri 22 Aug, 2008 :: G03 Napier Building University of Adelaide :: Dr Jean-Luc Thiffeault :: Department of Mathematics, University of Wisconsin-Madison
I will report on experiments of chaotic mixing in closed and open
vessels, in which a highly viscous fluid is stirred by a moving
rod. In these experiments we analyze quantitatively how the
concentration field of a low-diffusivity dye relaxes towards
homogeneity, and observe a slow algebraic decay, at odds with the
exponential decay predicted by most previous studies. Visual
observations reveal the dominant role of the vessel wall, which
strongly influences the concentration field in the entire domain and
causes the anomalous scaling. A simplified 1D model supports our
experimental results. Quantitative analysis of the concentration
pattern leads to scalings for the distributions and the variance of
the concentration field consistent with experimental and numerical
results. I also discuss possible ways of avoiding the limiting role
of walls.
This is joint work with Emmanuelle Gouillart, Olivier Dauchot, and
Stephane Roux. 

Free surface Stokes flows with surface tension 15:10 Fri 5 Sep, 2008 :: G03 Napier Building University of Adelaide :: Prof. Darren Crowdy :: Imperial College London
In this talk, we will survey a number of different
free boundary problems involving slow viscous (Stokes) flows
in which surface tension is active on the free boundary. Both steady
and unsteady flows will be considered. Motivating applications
range from industrial processes such as viscous sintering (where
end-products are formed as a result of the surface-tension-driven densification
of a compact of smaller particles that are heated in order that they
coalesce) to biological phenomena such as understanding how
organisms swim (i.e. propel themselves) at low Reynolds numbers.
Common to our approach to all these problems will be an
analytical/theoretical treatment of model problems via complex variable methods, techniques well-known at infinite Reynolds numbers
but used much less often in the Stokes regime. These model
problems can give helpful insights into the behaviour of the true
physical systems. 

Mathematical modelling of blood flow in curved arteries 15:10 Fri 12 Sep, 2008 :: G03 Napier Building University of Adelaide :: Dr Jennifer Siggers :: Imperial College London
Atherosclerosis, characterised by plaques, is the most common arterial
disease. Plaques tend to develop in regions of low mean wall shear
stress, and regions where the wall shear stress changes direction during
the course of the cardiac cycle. To investigate the effect of the
arterial geometry and driving pressure gradient on the wall shear stress
distribution we consider an idealised model of a curved artery with
uniform curvature. We assume that the flow is fully developed and seek
solutions of the governing equations, finding the effect of the
parameters on the flow and wall shear stress distribution. Most
previous work assumes the curvature ratio is asymptotically small;
however, many arteries have significant curvature (e.g. the aortic arch
has curvature ratio approx 0.25), and in this work we consider in
particular the effect of finite curvature.
We present an extensive analysis of curved-pipe flow driven by steady
and unsteady pressure gradients. Increasing the curvature causes the
shear stress on the inside of the bend to rise, indicating that the risk
of plaque development would be overestimated by considering only the
weak curvature limit. 

Bursts and canards in a pituitary lactotroph model 15:10 Fri 6 Mar, 2009 :: Napier LG29 :: Dr Martin Wechselberger :: University of Sydney
Bursting oscillations in nerve cells have been the focus of a great deal of attention by mathematicians. These are typically studied by taking advantage of multiple timescales in the system under study to perform a singular perturbation analysis. Bursting also occurs in hormone-secreting pituitary cells, but is characterized by fast bursts with small electrical impulses. Although the separation of timescales is not as clear, singular perturbation analysis is still the key to understanding the bursting mechanism. In particular, we will show that canards are responsible for the observed oscillatory behaviour. 

Boltzmann's Equations for Suspension Flow in Porous Media and Correction of the Classical Model 15:10 Fri 13 Mar, 2009 :: Napier LG29 :: Prof Pavel Bedrikovetsky :: University of Adelaide
Suspension/colloid transport in porous media is a basic phenomenon in environmental, petroleum and chemical engineering. A suspension of particles moves through the porous medium and particles are captured by straining or attraction. We revise the classical equations for particle mass balance and particle capture kinetics and show their unrealistic behaviour in cases of large dispersion and of flow-free filtration. In order to resolve these paradoxes, a pore-scale model is derived. The model can be transformed into a Boltzmann equation with a particle distribution over pores. Introducing sink-source terms into the Boltzmann equation results in much simpler calculations than the traditional Chapman-Enskog averaging procedure. A technique of projecting operators in the Hilbert space of Fourier images is used. The projection subspace is constructed so as to avoid dependency of the averaged equations on the sink-source terms. The averaging yields explicit expressions for the particle flux and capture rate. The particle flux expression describes the decrease of the advective particle velocity relative to the carrier water velocity, due to preferential capture of "slow" particles in small pores. The capture rate kinetics describes capture from either advective or diffusive fluxes. The equations derived exhibit positive advection velocity for any dispersion, and particle capture in immobile fluid, which resolves the above-mentioned paradoxes.
Finally, we discuss validation of the model for propagation of contaminants in aquifers, for filtration, for potable water production by artesian wells, for formation damage in oilfields. 

Sloshing in tanks of liquefied natural gas (LNG) vessels 15:10 Wed 22 Apr, 2009 :: Napier LG29 :: Prof. Frederic Dias :: ENS, Cachan
The last scientific conversation I had with Ernie Tuck was on liquid impact. As a matter of fact, we discussed the paper by J.H. Milgram, Journal of Fluid Mechanics 37 (1969), entitled "The motion of a fluid in a cylindrical container with a free surface following vertical impact."
Liquid impact is a key issue in sloshing and in particular in sloshing in tanks of LNG vessels. Numerical simulations of sloshing have been performed by various groups, using various types of numerical methods. In terms of the numerical results, the outcome is often impressive, but the question remains of how relevant these results are when it comes to determining impact pressures. The numerical models are too simplified to reproduce the high variability of the measured pressures. In fact, for the time being, it is not possible to simulate accurately both global and local effects. Unfortunately it appears that local effects predominate over global effects when the behaviour of pressures is considered.
Having said this, it is important to point out that numerical studies can be quite useful to perform sensitivity analyses in idealized conditions such as a liquid mass falling under gravity on top of a horizontal wall and then spreading along the lateral sides. Simple analytical models inspired by numerical results on idealized problems can also be useful to predict trends.
The talk is organized as follows: After a brief introduction on the sloshing problem and on scaling laws, it will be explained to what extent numerical studies can be used to improve our understanding of impact pressures. Results on a liquid mass hitting a wall obtained by a finitevolume code with interface reconstruction as well as results obtained by a simple analytical model will be shown to reproduce the trends of experiments on sloshing.
This is joint work with L. Brosset (GazTransport & Technigaz), J.M. Ghidaglia (ENS Cachan) and J.P. Braeunig (INRIA). 

Dynamics of Moving Average Rules in a Continuous-time Financial Market Model 15:10 Fri 8 May, 2009 :: LG29 :: Associate Prof (Tony) Xuezhong He :: University of Technology Sydney
Within a continuous-time framework, this paper proposes a stochastic
heterogeneous agent model (HAM) of financial markets with time
delays to unify various moving average rules used in discrete-time
HAMs. Intuitive conditions for the stability of the fundamental price of
the deterministic model in terms of agents' behavior parameters and
time delay are obtained. By focusing on the stabilizing role of the
time delay, it is found that an increase in the time delay not only can
destabilize the market price, resulting in an oscillatory market price
characterized by a Hopf bifurcation, but also can stabilize an
otherwise unstable market price. Numerical simulations show that the
stochastic model is able to characterize long deviations of the
market price from its fundamental price and excess volatility and
generate most of the stylized facts observed in financial markets.
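The delay-induced Hopf instability described above can be illustrated with the simplest linear delay equation, x'(t) = a·x(t) + b·x(t - tau), integrated with an Euler scheme; the coefficients below are chosen for illustration and are not taken from the paper. For a = -0.5, b = -1 the zero solution is stable for small delays but oscillates with growing amplitude once tau passes a critical value near 2.4.

```python
import numpy as np

def simulate_delay(a, b, tau, t_end=200.0, dt=0.01, x0=1.0):
    """Euler integration of x'(t) = a*x(t) + b*x(t - tau)
    with constant history x(t) = x0 for t <= 0."""
    n = int(t_end / dt)
    lag = int(tau / dt)
    x = np.full(n + lag, x0)  # first `lag` entries store the history
    for t in range(lag, n + lag - 1):
        x[t + 1] = x[t] + dt * (a * x[t] + b * x[t - lag])
    return x[lag:]

stable = simulate_delay(-0.5, -1.0, tau=0.5)    # decays to the fixed point
unstable = simulate_delay(-0.5, -1.0, tau=3.0)  # growing oscillation
```

The crossing of a pair of characteristic roots through the imaginary axis as the delay grows is exactly the Hopf mechanism by which a moving-average (delayed feedback) rule can destabilize an otherwise stable price.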


Averaging reduction for stochastic PDEs 15:10 Fri 5 Jun, 2009 :: LG29 :: Dr Wei Wang :: University of Adelaide
In this talk, I introduce recent work on macroscopic reduction for stochastic PDEs by an averaging method. Furthermore, by using special coupling boundary conditions, a macroscopic discrete approximation model can be derived. 

Dispersing and settling populations in biology 15:10 Tue 23 Jun, 2009 :: Napier G03 :: Prof Kerry Landman :: University of Melbourne
Partial differential equations are used to model populations (such as cells, animals or molecules) consisting of individuals that undergo two important processes: dispersal and settling. I will describe some general characteristics of these systems, as well as some of our recent projects. 

Predicting turbulence 12:10 Wed 12 Aug, 2009 :: Napier 210 :: Dr Trent Mattner :: University of Adelaide
Turbulence is characterised by three-dimensional unsteady fluid motion over a wide range of spatial and temporal scales. It is important in many problems of technological and scientific interest, such as drag reduction, energy production and climate prediction. In this talk, I will explain why turbulent flows are difficult to predict and describe a modern mathematical model of turbulence based on a random collection of fluid vortices.


Modelling fluidstructure interactions in microdevices 15:00 Thu 3 Sep, 2009 :: School Board Room :: Dr Richard Clarke :: University of Auckland
The flows generated in many modern microdevices possess very little convective inertia; however, they can be highly unsteady and exert substantial hydrodynamic forces on the device components. Typically these components exhibit some degree of compliance, which traditionally has been treated using simple one-dimensional elastic beam models. However, recent findings suggest that three-dimensional effects can be important and, accordingly, we consider the elastohydrodynamic response of a rapidly oscillating three-dimensional elastic plate immersed in a viscous fluid. In addition, a preliminary model will be presented which incorporates the presence of a nearby elastic wall. 

Modelling and pricing for portfolio credit derivatives 15:10 Fri 16 Oct, 2009 :: MacBeth Lecture Theatre :: Dr Ben Hambly :: University of Oxford
The current financial crisis has been in part precipitated by the
growth of complex credit derivatives and their mispricing. This talk
will discuss some of the background to the `credit crunch', as well as
the models and methods used currently. We will then develop an alternative
view of large basket credit derivatives, as functions of a stochastic
partial differential equation, which addresses some of the shortcomings. 

Nonlinear time series econometrics and financial econometrics: a personal overview 15:10 Fri 12 Mar, 2010 :: Napier G04 :: Prof Jiti Gao :: University of Adelaide
Using ten examples, the talk focuses on recent developments in nonlinear time series econometrics and financial econometrics.
Such examples cover the following models:
1. Nonlinear time series trend model;
2. Partially linear autoregressive model;
3. Nonlinear capital asset pricing model;
4. Additive capital asset pricing model;
5. Varying-coefficient capital asset pricing model;
6. Semiparametric error-term model;
7. Nonlinear and nonstationary model;
8. Partially linear ARCH model;
9. Continuous-time financial model; and
10. Stochastic volatility model. 

American option pricing in a Markov chain market model 15:10 Fri 19 Mar, 2010 :: School Board Room :: Prof Robert Elliott :: School of Mathematical Sciences, University of Adelaide
This paper considers a model for asset pricing in a world where
the randomness is modeled by a Markov chain rather than Brownian motion.
In this paper we develop a theory of optimal stopping and related
variational inequalities for American options in this model. A version of
Saigal's Lemma is established and numerical algorithms developed.
This is joint work with John van der Hoek. 
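The optimal stopping problem for an American option in a finite-state Markov chain market reduces to a simple dynamic programme: at each step the holder takes the larger of immediate exercise and the discounted expected continuation value. The sketch below prices an American put on an invented five-state chain; the states, transition matrix and discount factor are illustrative, and this backward induction is a generic method, not the paper's algorithm.

```python
import numpy as np

# Hypothetical 5-state chain for the asset price.
prices = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
P = np.array([[0.6, 0.4, 0.0, 0.0, 0.0],
              [0.2, 0.5, 0.3, 0.0, 0.0],
              [0.0, 0.25, 0.5, 0.25, 0.0],
              [0.0, 0.0, 0.3, 0.5, 0.2],
              [0.0, 0.0, 0.0, 0.4, 0.6]])
strike, disc, n_steps = 100.0, 0.99, 50
payoff = np.maximum(strike - prices, 0.0)  # American put payoff per state

V = payoff.copy()  # value at expiry
for _ in range(n_steps):
    # optimal stopping: exercise now, or hold and collect the
    # discounted expected value of the next-period value function
    V = np.maximum(payoff, disc * P @ V)
```

The exercise region is simply the set of states where V equals the payoff; the variational inequality of the talk is the continuous-time analogue of this pointwise maximum.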

The fluid mechanics of gels used in tissue engineering 15:10 Fri 9 Apr, 2010 :: Santos Lecture Theatre :: Dr Edward Green :: University of Western Australia
Tissue engineering could be called 'the science of spare parts'.
Although currently in its infancy, its long-term aim is to grow
functional tissues and organs in vitro to replace those which have
become defective through age, trauma or disease. Recent experiments
have shown that mechanical interactions between cells and the materials
in which they are grown have an important influence on tissue
architecture, but in order to understand these effects, we first need to
understand the mechanics of the gels themselves.
Many biological gels (e.g. collagen) used in tissue engineering have a
fibrous microstructure which affects the way forces are transmitted
through the material, and which in turn affects cell migration and other
behaviours. I will present a simple continuum model of gel mechanics,
based on treating the gel as a transversely isotropic viscous material.
Two canonical problems are considered involving thin two-dimensional
films: extensional flow, and squeezing flow of the fluid between two
rigid plates. Neglecting inertia, gravity and surface tension, in each
regime we can exploit the thin geometry to obtain a leading-order
problem which is sufficiently tractable to allow the use of analytical
methods. I discuss how these results could be exploited practically to
determine the mechanical properties of real gels. If time permits, I
will also talk about work currently in progress which explores the
interaction between gel mechanics and cell behaviour. 

Mathematical epidemiology with a focus on households 15:10 Fri 23 Apr, 2010 :: Napier G04 :: Dr Joshua Ross :: University of Adelaide
Mathematical models are now used routinely to inform national and global policymakers on issues that threaten human health or which have an adverse impact on the economy. In the first part of this talk I will provide an overview of mathematical epidemiology, starting with the classical deterministic model and leading to some of the current challenges. I will then present some of my recently published work which provides computationally efficient methods for studying a mathematical model incorporating household structure. We will conclude by briefly discussing some "work-in-progress" which utilises these methods to address the issues of inference, and mixing pattern and contact structure, for emerging infections. 
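As a minimal illustration of the classical deterministic model mentioned above (assumed here to be the standard SIR model; the parameter values are invented for illustration, not taken from the talk), a simple Euler integration sketch:

```python
# Sketch of the classical deterministic SIR epidemic model,
# integrated with a basic Euler scheme. Parameter values are illustrative.
def sir_euler(beta, gamma, s0, i0, r0, dt=0.01, steps=10000):
    """Integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I."""
    s, i, r = s0, i0, r0
    for _ in range(steps):
        new_inf = beta * s * i * dt   # new infections this step
        new_rec = gamma * i * dt      # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

# With beta/gamma = 2, a substantial fraction of the population is infected.
s, i, r = sir_euler(beta=0.5, gamma=0.25, s0=0.99, i0=0.01, r0=0.0)
```

The household-structured models of the talk are far richer than this, but they share the same compartmental bookkeeping: individuals move between epidemiological states at rates that depend on the current state.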

Functorial 2-connected covers 13:10 Fri 21 May, 2010 :: School Board Room :: David Roberts :: University of Adelaide
The Whitehead tower of a topological space seeks to resolve that space by successively removing homotopy groups from the 'bottom up'. For a path-connected space with no 1-dimensional local pathologies the first stage in the tower can be chosen to be the universal (= 1-connected) covering space. This construction also works in the category Diff of manifolds. However, further stages in the two known constructions of the Whitehead tower do not work in Diff, being purely topological - and one of these is non-functorial, depending on a large number of choices. This talk will survey results from my thesis which constructs a new, functorial model for the 2-connected cover which will lift to a generalised (2-)category of smooth objects.
This talk contains joint work with Andrew Stacey of the Norwegian University of Science and Technology. 

A variance constraining ensemble Kalman filter: how to improve forecasts using climatic data of unobserved variables 15:10 Fri 28 May, 2010 :: Santos Lecture Theatre :: A/Prof Georg Gottwald :: The University of Sydney
Data assimilation aims to solve one of the fundamental problems of numerical weather prediction: estimating the optimal state of the
atmosphere given a numerical model of the dynamics, and sparse, noisy
observations of the system. A standard tool in attacking this
filtering problem is the Kalman filter.
We consider the problem when only partial observations are available.
In particular we consider the situation where the observational space
consists of variables which are directly observable with known
observational error, and of variables of which only their climatic
variance and mean are given. We derive the corresponding Kalman
filter in a variational setting.
We analyze the variance constraining Kalman filter (VCKF) for
a simple linear toy model and determine its range of optimal
performance. We explore the variance constraining Kalman filter in an
ensemble transform setting for the Lorenz-96 system, and show that
incorporating the information on the variance on some unobservable
variables can improve the skill and also increase the stability of
the data assimilation procedure.
Using methods from dynamical systems theory, we then study systems where the
unobserved variables evolve deterministically but chaotically on a
fast time scale.
This is joint work with Lewis Mitchell and Sebastian Reich.
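As a hedged illustration of the analysis step that underlies all Kalman-type filters discussed above (a scalar toy case only, not the variance constraining variant itself):

```python
def kalman_update(x_prior, p_prior, y, r_obs):
    """One scalar Kalman analysis step: blend a prior estimate
    (mean x_prior, variance p_prior) with an observation y of variance r_obs."""
    k = p_prior / (p_prior + r_obs)       # Kalman gain
    x_post = x_prior + k * (y - x_prior)  # analysis mean pulled toward y
    p_post = (1.0 - k) * p_prior          # analysis variance always shrinks
    return x_post, p_post

# Equal prior and observation variance: the analysis splits the difference.
x, p = kalman_update(x_prior=0.0, p_prior=1.0, y=2.0, r_obs=1.0)
```

The variance constraining idea of the talk adds a further term for variables where only the climatic mean and variance, rather than direct observations, are available; this sketch shows only the standard directly-observed case.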


A spatial-temporal point process model for fine resolution multi-site rainfall data from Roma, Italy 14:10 Thu 19 Aug, 2010 :: Napier G04 :: A/Prof Paul Cowpertwait :: Auckland University of Technology
A point process rainfall model is further developed that has storm origins occurring in space-time according to a Poisson process. Each storm origin has a random radius so that storms occur as circular regions in two-dimensional
space, where the storm radii are taken to be independent exponential random
variables. Storm origins are of random type z, where z follows a continuous
probability distribution. Cell origins occur in a further spatial Poisson
process and have arrival times that follow a Neyman-Scott point process. Cell
origins have random radii so that cells form discs in two-dimensional space.
Statistical properties up to third order are derived and used to fit the model
to 10 min series taken from 23 sites across the Roma region, Italy.
Distributional properties of the observed annual maxima are compared to
equivalent values sampled from series that are simulated using the fitted
model. The results indicate that the model will be of use in urban drainage
projects for the Roma region.
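The storm-origin mechanism above rests on a homogeneous spatial Poisson process. A minimal sampling sketch (the rectangle dimensions and rate are invented for illustration, not taken from the fitted Roma model):

```python
import math
import random

def poisson_points(rate, width, height, rng):
    """Sample a homogeneous spatial Poisson process on a width x height
    rectangle: draw a Poisson count, then place points uniformly."""
    mean = rate * width * height
    # Draw n ~ Poisson(mean) by inversion (Knuth's method; fine for small means).
    n, p, threshold = 0, 1.0, math.exp(-mean)
    while p > threshold:
        p *= rng.random()
        n += 1
    n -= 1
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]

rng = random.Random(1)
storms = poisson_points(rate=1.0, width=5.0, height=2.0, rng=rng)
```

In the full model each sampled origin would additionally carry an exponential radius and a type z, with cells layered on top in a further Poisson process.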


Compound and constrained regression analyses for EIV models 15:05 Fri 27 Aug, 2010 :: Napier LG28 :: Prof Wei Zhu :: State University of New York at Stony Brook
In linear regression analysis, randomness often exists in the independent variables and the resulting models are referred to as errors-in-variables (EIV) models. The existing general EIV modeling framework, the structural model approach, is parametric and dependent on the usually unknown underlying distributions. In this work, we introduce a general nonparametric EIV modeling framework, the compound regression analysis, featuring an intuitive geometric representation and a one-to-one correspondence to the structural model. Properties, examples and further generalizations of this new modeling approach are discussed in this talk. 

A polyhedral model for boron nitride nanotubes 15:10 Fri 3 Sep, 2010 :: Napier G04 :: Dr Barry Cox :: University of Adelaide
The conventional rolled-up model of nanotubes does not apply to the very small radii tubes, for which curvature effects become significant. In this talk an existing geometric model for carbon nanotubes proposed by the authors, which accommodates this deficiency and which is based on the exact polyhedral cylindrical structure, is extended to a nanotube structure involving two species of atoms in equal proportion, and in particular boron nitride nanotubes. This generalisation allows the principal features to be included as the fundamental assumptions of the model, such as equal bond length but distinct bond angles and radii between the two species. The polyhedral model is based on five simple geometric assumptions: (i) all bonds are of equal length, (ii) all bond angles for the boron atoms are equal, (iii) all boron atoms lie at an equal distance from the nanotube axis, (iv) all nitrogen atoms lie at an equal distance from the nanotube axis, and (v) there exists a fixed ratio of pyramidal height H between the boron species compared with the corresponding height in a symmetric single species nanotube.
Working from these postulates, expressions are derived for the various structural parameters such as radii and bond angles for the two species for specific values of the chiral vector numbers (n,m). The new model incorporates an additional constant of proportionality H, which we assume applies to all nanotubes comprising the same elements and is such that H = 1 for a single species nanotube. Comparison with `ab initio' studies suggests that this assumption is entirely reasonable, and in particular we determine the value H = 0.56 ± 0.04 for boron nitride, based on computational results in the literature.
This talk relates to work which is a couple of years old and, given time at the end, we will discuss some newer results in geometric models developed with our former student Richard Lee (now also at the University of Adelaide as a postdoc) and some work-in-progress on carbon nanocones.
Note: pyramidal height is our own terminology and will be explained in the talk.


Hugs not drugs 15:10 Mon 20 Sep, 2010 :: Ingkarni Wardli B17 :: Dr Scott McCue :: Queensland University of Technology
I will discuss a model for drug diffusion that involves a Stefan problem with a "kinetic undercooling". I like Stefan problems, so I like this model. I like drugs too, but only legal ones of course. Anyway, it turns out that in some parameter regimes, this sophisticated moving boundary problem hardly works better than a simple linear undergraduate model (there's a lesson here for mathematical modelling). On the other hand, for certain polymer capsules, the results are interesting and suggest new means for controlled drug delivery. If time permits, I may discuss certain asymptotic limits that are of interest from a Stefan problem perspective. Finally, I won't bring any drugs with me to the seminar, but I'm willing to provide hugs if necessary. 

The mathematics of smell 15:10 Wed 29 Sep, 2010 :: Ingkarni Wardli 5.57 :: Dr Michael Borgas :: CSIRO Light Metals Flagship; Marine and Atmospheric Research; Centre for Australian Weather and Clim
The sense of smell is important in nature, but the least well understood of our senses. A mathematical model of smell, which combines the transmission of volatile organic compound (VOC) chemical signals on the wind, transduced by olfactory receptors in our noses into neural information, and assembled into our odour perception, is useful. Applications range from regulations for odour nuisance, like German VDI protocols for calibrated noses, to the design of modern chemical sensors for extracting information from the environment and even to the perfume industry. This talk gives a broad overview of turbulent mixing in surface layers of the atmosphere, measurements of VOCs with PTR-MS (proton-transfer-reaction mass spectrometers), our noses, and integrated environmental models of the Alumina industry (a source of odour emissions) to help understand the science of smell. 

Arbitrage bounds for weighted variance swap prices 15:05 Fri 3 Dec, 2010 :: Napier LG28 :: Prof Mark Davis :: Imperial College London
This paper builds on earlier work by Davis and Hobson (Mathematical Finance, 2007) giving model-free (except for a 'frictionless markets' assumption) necessary and sufficient conditions for absence of arbitrage given a set of current-time put and call options on some underlying asset. Here we suppose that the prices of a set of put options, all maturing at the same time, are given and satisfy the conditions for consistency with absence of arbitrage. We now add a path-dependent option, specifically a weighted variance swap, to the set of traded assets and ask what are the conditions on its time-0 price under which consistency with absence of arbitrage is maintained. In the present work, we work under the extra modelling assumption that the underlying asset price process has continuous paths. In general, we find that there is always a non-trivial lower bound to the range of arbitrage-free prices, but only in the case of a corridor swap do we obtain a finite upper bound. In the case of, say, the vanilla variance swap, a finite upper bound exists when there are additional traded European options which constrain the left wing of the volatility surface in appropriate ways. 

Queues with skill-based routing under FCFS–ALIS regime 15:10 Fri 11 Feb, 2011 :: B17 Ingkarni Wardli :: Prof Gideon Weiss :: The University of Haifa, Israel
We consider a system where jobs of several types are served by servers
of several types, and a bipartite graph between server types and job types
describes feasible assignments. This is a common situation in manufacturing,
call centers with skill-based routing, matching of parent-child in adoption or
matching in kidney transplants etc. We consider the case of first come first
served policy: jobs are assigned to the first available feasible server in
order of their arrivals. We consider two types of policies for assigning
customers to idle servers - a random assignment and assignment to the longest
idle server (ALIS). We survey some results for four different situations:
- For a loss system we find conditions for reversibility and insensitivity.
- For a manufacturing-type system, in which there is enough capacity to serve
all jobs, we discuss a product form solution and waiting times.
- For an infinite matching model, in which an infinite sequence of customers of
IID types and an infinite sequence of servers of IID types are matched
according to first come first served, we obtain a product form stationary
distribution for this system, which we use to calculate matching rates.
- For a call center model with overload and abandonments we make some plausible
observations.
This talk surveys joint work with Ivo Adan, Rene Caldentey, Cor Hurkens, Ed
Kaplan and Damon Wischik, as well as work by Jeremy Visschers, Rishy Talreja and
Ward Whitt.


Heat transfer scaling and emergence of three-dimensional flow in horizontal convection 15:10 Fri 25 Feb, 2011 :: Conference Room Level 7 Ingkarni Wardli :: Dr Greg Sheard :: Monash University
Horizontal convection refers to flows driven by uneven heating on a horizontal forcing boundary. Flows exhibiting these characteristics are prevalent in nature, and include the North-South Hadley circulation within the atmosphere between warmer and more temperate latitudes, as well as ocean currents driven by non-uniform heating via solar radiation.
Here a model for these generic convection flows is established featuring a rectangular enclosure, insulated on the side and top
walls, and driven by a linear temperature gradient applied along the bottom wall. Rayleigh number dependence of heat transfer
through the forcing boundary is computed and compared with theory. Attention is given to transitions in the flow, including the
development of unsteady flow and three-dimensional flow: the effect of these transitions on the Nusselt-Rayleigh number scaling exponents is described.


What is a p-adic number? 12:10 Mon 28 Feb, 2011 :: 5.57 Ingkarni Wardli :: Alexander Hanysz :: University of Adelaide
The p-adic numbers are:
(a) something that visiting seminar speakers invoke when they want to frighten the audience;
(b) a fascinating and useful concept in modern algebra;
(c) alphabetically just before q-adic numbers?
In this talk I hope to convince the audience that option (b) is worth considering. I will begin by reviewing how we get from integers via rational numbers to the real number system. Then we'll look at how this process can be "twisted" to produce something new. 
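By way of a small editorial illustration (not from the talk itself): the "twist" replaces the usual absolute value with the p-adic one, under which an integer is small when it is divisible by a high power of p. A sketch for integers:

```python
def padic_valuation(n, p):
    """Largest k such that p**k divides the nonzero integer n."""
    n = abs(n)
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def padic_abs(n, p):
    """The p-adic absolute value |n|_p = p**(-v_p(n)), with |0|_p = 0."""
    if n == 0:
        return 0.0
    return p ** (-padic_valuation(n, p))

# 12 = 2**2 * 3, so it is "small" 2-adically and 3-adically but not 5-adically.
```

Completing the rationals with respect to this absolute value, instead of the usual one, is what produces the p-adic number system.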

Mathematical modelling in nanotechnology 15:10 Fri 4 Mar, 2011 :: 7.15 Ingkarni Wardli :: Prof Jim Hill :: University of Adelaide
In this talk we present an overview of the mathematical modelling contributions of the Nanomechanics Groups at the Universities of Adelaide and Wollongong. Fullerenes and carbon nanotubes have unique properties, such as low weight, high strength, flexibility, high thermal conductivity and chemical stability, and they have many potential applications in nanodevices. We first present some new results on the geometric structure of carbon nanotubes and on related nanostructures. One concept that has attracted much attention is the creation of nano-oscillators, to produce frequencies in the gigahertz range, for applications such as ultrafast optical filters and nano-antennae. The sliding of an inner shell inside an outer shell of a multi-walled carbon nanotube can generate oscillatory frequencies up to several gigahertz, and the shorter the inner tube the higher the frequency. A C60-nanotube oscillator generates high frequencies by oscillating a C60 fullerene inside a single-walled carbon nanotube. Here we discuss the underlying mechanisms of nano-oscillators, using the Lennard-Jones potential together with the continuum approach to mathematically model the C60-nanotube oscillator. Finally, three illustrative examples of recent modelling in hydrogen storage, nanomedicine and nanocomputing are discussed. 

To what extent can the Black-Scholes model be applied in the financial market? 12:10 Mon 21 Mar, 2011 :: 5.57 Ingkarni Wardli :: Ahmed Hamada :: University of Adelaide
Black and Scholes introduced a new approach to modelling stock price dynamics about three decades ago. The so-called Black-Scholes model seems well adapted to the nature of market prices, mainly because of its use of Brownian motion and the mathematical properties that follow from it. Like every theoretical model, put in practice it does not appear to be flawless, which means that new adaptations and extensions should be made so that engineers and marketers can use the Black-Scholes model to trade and hedge risk on the market. A more detailed description with applications will be given in the talk. 
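The Brownian-motion assumption mentioned above means asset prices follow geometric Brownian motion. A hedged simulation sketch (drift, volatility and horizon are illustrative values, not calibrated to any market):

```python
import math
import random

def simulate_gbm(s0, mu, sigma, t, n_steps, rng):
    """Simulate one geometric Brownian motion path via the exact update
    S_{k+1} = S_k * exp((mu - sigma**2/2)*dt + sigma*sqrt(dt)*Z)."""
    dt = t / n_steps
    s = s0
    path = [s]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)  # standard normal increment
        s *= math.exp((mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z)
        path.append(s)
    return path

rng = random.Random(42)
path = simulate_gbm(s0=100.0, mu=0.05, sigma=0.2, t=1.0, n_steps=252, rng=rng)
```

Because the update is multiplicative with an exponential factor, simulated prices stay strictly positive - one of the properties that makes the model "adapted to the nature of market prices", and also one of the assumptions the extensions mentioned in the talk relax.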

A mathematical investigation of methane encapsulation in carbon nanotubes. 12:10 Mon 21 Mar, 2011 :: 5.57 Ingkarni Wardli :: Olumide Adisa :: University of Adelaide
"I hope we don't have to wait until oil and coal run out before we tackle that." - Thomas Edison, 1931. In a bid to resolve energy issues consistent with Thomas Edison's worries, scientists have been looking at other clean and sustainable sources of energy such as natural gas - methane. In this talk, the interaction between a methane molecule and carbon nanotubes is investigated mathematically, using two different models - first discrete and second, continuous. These models are analyzed to determine the dimensions of the particular nanotubes which will readily suck up methane molecules. The results determine the minimum and maximum interaction energies required for methane encapsulation in different tube sizes, and establish the second model of the methane molecule as a simple and elegant model which might be exploited for other problems. 

Nanotechnology: The mathematics of gas storage in metal-organic frameworks. 12:10 Mon 28 Mar, 2011 :: 5.57 Ingkarni Wardli :: Wei Xian Lim :: University of Adelaide
Have you thought about what sort of car you would be driving in the future? Would it be a hybrid, solar, hydrogen or electric car? I would like to be driving a hydrogen car because my field of research may aid in their development! In my presentation I will introduce you to the world of metal-organic frameworks, which are an exciting new class of materials that have great potential in applications such as hydrogen gas storage. I will also discuss the mathematical model that I am using to model the performance of metal-organic frameworks based on beryllium. 

Modelling of Hydrological Persistence in the Murray-Darling Basin for the Management of Weirs 12:10 Mon 4 Apr, 2011 :: 5.57 Ingkarni Wardli :: Aiden Fisher :: University of Adelaide
The lakes and weirs along the lower Murray River in Australia are aggregated and
considered as a sequence of five reservoirs. A seasonal Markov chain model for
the system will be implemented, and a stochastic dynamic program will be used to
find optimal release strategies, in terms of expected monetary value (EMV), for
the competing demands on the water resource given the stochastic nature of
inflows. Matrix analytic methods will be used to analyse the system further, and
in particular enable the full distribution of first passage times between any
groups of states to be calculated. The full distribution of first passage times
can be used to provide a measure of the risk associated with optimum EMV
strategies, such as conditional value at risk (CVaR). The sensitivity of the
model, and risk, to changing rainfall scenarios will be investigated. The effect
of decreasing the level of discretisation of the reservoirs will be explored.
Also, the use of matrix analytic methods facilitates the use of hidden states to
allow for hydrological persistence in the inflows. Evidence for hydrological
persistence of inflows to the lower Murray system, and the effect of making
allowance for this, will be discussed. 
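The stochastic dynamic programming step described above can be sketched by backward induction on a hypothetical single reservoir (the three-level state space, rewards and inflow probabilities below are invented for illustration, not taken from the Murray model):

```python
def optimal_release(levels, actions, reward, transition, horizon):
    """Finite-horizon stochastic dynamic program (backward induction):
    value[s] = max over a of reward(s, a) + sum_{s'} P(s'|s, a) * value[s'].
    `transition(s, a)` returns a dict {next_state: probability}."""
    value = {s: 0.0 for s in levels}
    policy = {}
    for _ in range(horizon):
        new_value = {}
        for s in levels:
            best_a, best_v = None, float("-inf")
            for a in actions(s):
                v = reward(s, a) + sum(p * value[s2]
                                       for s2, p in transition(s, a).items())
                if v > best_v:
                    best_a, best_v = a, v
            new_value[s] = best_v
            policy[s] = best_a  # first-decision policy for the full horizon
        value = new_value
    return value, policy

# Toy reservoir: levels 0..2, release 0 or 1 unit, reward = water released,
# inflow of one unit with probability 0.5 (capped at the top level).
levels = [0, 1, 2]
def actions(s):
    return [0] if s == 0 else [0, 1]
def reward(s, a):
    return float(a)
def transition(s, a):
    s_after = s - a
    hi = min(s_after + 1, 2)
    return {s_after: 0.5, hi: 0.5} if hi != s_after else {s_after: 1.0}

value, policy = optimal_release(levels, actions, reward, transition, horizon=10)
```

The real model is much larger (five reservoirs, seasonal transition matrices, EMV rewards for competing demands), but the backward-induction structure is the same; the matrix analytic machinery then supplies first passage time distributions on top of the fitted chain.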

On parameter estimation in population models 15:10 Fri 6 May, 2011 :: 715 Ingkarni Wardli :: Dr Joshua Ross :: The University of Adelaide
Essential to applying a mathematical model to a realworld application is
calibrating the model to data. Methods for calibrating population models
often become computationally infeasible when the population size (more generally
the size of the state space) becomes large, or other complexities such as
time-dependent transition rates, or sampling error, are present. Here we
will discuss the use of diffusion approximations to perform estimation in several
scenarios, with successively reduced assumptions: (i) under the assumption
of stationarity (the process had been evolving for a very long time with
constant parameter values); (ii) transient dynamics (the assumption of stationarity
is invalid, and thus only constant parameter values may be assumed); and, (iii)
time-inhomogeneous chains (the parameters may vary with time) and accounting
for observation error (a sample of the true state is observed). 

When statistics meets bioinformatics 12:10 Wed 11 May, 2011 :: Napier 210 :: Prof Patty Solomon :: School of Mathematical Sciences
Bioinformatics is a new field of research which encompasses mathematics, computer science, biology, medicine and the physical sciences. It has arisen from the need to handle and analyse the vast amounts of data being generated by the new genomics technologies. The interface of these disciplines used to be information-poor, but is now information-mega-rich, and statistics plays a central role in processing this information and making it intelligible. In this talk, I will describe a published bioinformatics study which claimed to have developed a simple test for the early detection of ovarian cancer from a blood sample. The US Food and Drug Administration was on the verge of approving the test kits for market in 2004 when demonstrated flaws in the study design and analysis led to its withdrawal. We are still waiting for an effective early biomarker test for ovarian cancer. 

Statistical challenges in molecular phylogenetics 15:10 Fri 20 May, 2011 :: Mawson Lab G19 lecture theatre :: Dr Barbara Holland :: University of Tasmania
This talk will give an introduction to the ways that mathematics and statistics get used in the inference of evolutionary (phylogenetic) trees. Taking a model-based approach to estimating the relationships between species has proven to be enormously effective; however, there are some tricky statistical challenges that remain. The increasingly plentiful amount of DNA sequence data is a boon, but it is also throwing a spotlight on some of the shortcomings of current best practice, particularly in how we (1) assess the reliability of our phylogenetic estimates, and (2) choose appropriate models. This talk will aim to give a general introduction to this area of research and will also highlight some results from two of my recent PhD students. 

Statistical modelling in economic forecasting: a semiparametric spatio-temporal approach 12:10 Mon 23 May, 2011 :: 5.57 Ingkarni Wardli :: Dawlah Alsulami :: University of Adelaide
How to model spatio-temporal variation of housing prices is an important and challenging problem, as it is of vital importance for both investors and policy makers to assess any movement in housing prices. In this seminar I will talk about the proposed model to estimate any movement in housing prices and measure the risk more accurately. 

Optimal experimental design for stochastic population models 15:00 Wed 1 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Dan Pagendam :: CSIRO, Brisbane
Markov population processes are popular models for studying a wide range of
phenomena including the spread of disease, the evolution of chemical reactions
and the movements of organisms in population networks (metapopulations). Our
ability to use these models effectively can be limited by our knowledge about
parameters, such as disease transmission and recovery rates in an epidemic.
Recently, there has been interest in devising optimal experimental designs for
stochastic models, so that practitioners can collect data in a manner that
maximises the precision of maximum likelihood estimates of the parameters for
these models. I will discuss some recent work on optimal design for a variety
of population models, beginning with some simple one-parameter models where the
optimal design can be obtained analytically and moving on to more complicated
multi-parameter models in epidemiology that involve latent states and
non-exponentially distributed infectious periods. For these more complex
models, the optimal design must be arrived at using computational methods and we
rely on a Gaussian diffusion approximation to obtain analytical expressions for
Fisher's information matrix, which is at the heart of most optimality criteria
in experimental design. I will outline a simple cross-entropy algorithm that
can be used for obtaining optimal designs for these models. We will also
explore the improvements in experimental efficiency when using the optimal
design over some simpler designs, such as the design where observations are
spaced equidistantly in time. 

Inference and optimal design for percolation and general random graph models (Part I) 09:30 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge
The problem of optimal arrangement of nodes of a random weighted graph
is discussed in this workshop. The nodes of graphs under study are fixed, but
their edges are random and established according to the so-called
edge-probability function. This function is assumed to depend on the weights
attributed to the pairs of graph nodes (or distances between them) and a
statistical parameter. It is the purpose of experimentation to make inference on
the statistical parameter and thus to extract as much information about it as
possible. We also distinguish between two different experimentation scenarios:
progressive and instructive designs.
We adopt a utility-based Bayesian framework to tackle the optimal design problem
for random graphs of this kind. Simulation-based optimisation methods, mainly
Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We
study the optimal design problem for inference based on partial observations of
random graphs by employing a data augmentation technique. We prove that the
infinitely growing or diminishing node configurations asymptotically represent
the worst node arrangements. We also obtain the exact solution to the optimal
design problem for proximity (geometric) graphs and numerical solution for
graphs with threshold edge-probability functions.
We consider inference and optimal design problems for finite clusters from bond
percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both
numerical and analytical results for these graphs. We introduce inner-outer
plots by deleting some of the lattice nodes and show that the "mostly populated"
designs are not necessarily optimal in the case of incomplete observations under
both progressive and instructive design scenarios. Some of the obtained results
may generalise to other lattices. 

Inference and optimal design for percolation and general random graph models (Part II) 10:50 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge
The problem of optimal arrangement of nodes of a random weighted graph
is discussed in this workshop. The nodes of graphs under study are fixed, but
their edges are random and established according to the so-called
edge-probability function. This function is assumed to depend on the weights
attributed to the pairs of graph nodes (or distances between them) and a
statistical parameter. It is the purpose of experimentation to make inference on
the statistical parameter and thus to extract as much information about it as
possible. We also distinguish between two different experimentation scenarios:
progressive and instructive designs.
We adopt a utility-based Bayesian framework to tackle the optimal design problem
for random graphs of this kind. Simulation-based optimisation methods, mainly
Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We
study the optimal design problem for inference based on partial observations of
random graphs by employing a data augmentation technique. We prove that the
infinitely growing or diminishing node configurations asymptotically represent
the worst node arrangements. We also obtain the exact solution to the optimal
design problem for proximity (geometric) graphs and numerical solution for
graphs with threshold edge-probability functions.
We consider inference and optimal design problems for finite clusters from bond
percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both
numerical and analytical results for these graphs. We introduce inner-outer
plots by deleting some of the lattice nodes and show that the "mostly populated"
designs are not necessarily optimal in the case of incomplete observations under
both progressive and instructive design scenarios. Some of the obtained results
may generalise to other lattices. 

Stochastic models of reaction diffusion 15:10 Fri 17 Jun, 2011 :: 7.15 Ingkarni Wardli :: Prof Jon Chapman :: Oxford University
We consider two different position jump processes: (i) a random
walk on a lattice (ii) the Euler scheme for the Smoluchowski
differential equation. Both of these reduce to the diffusion equation as the time step
and size of the jump tend to zero.
We consider the problem of adding chemical reactions to these
processes, both at a surface and in the bulk. We show how the
"microscopic" parameters should be chosen to achieve the correct
"macroscopic" reaction rate. This choice is found to depend on
which stochastic model for diffusion is used. 
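The first of the two jump processes above, the lattice random walk, recovers diffusion in the limit because its mean-square displacement grows linearly with the number of steps. A minimal sketch (walker counts and step sizes are illustrative):

```python
import random

def walk_msd(n_walkers, n_steps, dx, rng):
    """Unbiased random walk on a 1-D lattice: each walker hops +dx or -dx
    per step; returns the mean-square displacement after n_steps."""
    msd = 0.0
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_steps):
            x += dx if rng.random() < 0.5 else -dx
        msd += x * x
    return msd / n_walkers

rng = random.Random(0)
# Theory predicts mean-square displacement = n_steps * dx**2, i.e. 2*D*t
# with diffusivity D = dx**2 / (2*dt); here that is 100 (up to sampling noise).
msd = walk_msd(n_walkers=2000, n_steps=100, dx=1.0, rng=rng)
```

Adding a reaction means removing or converting walkers at some per-step probability; the talk's point is that the per-step ("microscopic") probability reproducing a given macroscopic rate differs between this scheme and the Euler scheme for the Smoluchowski equation.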

Alignment of time course gene expression data sets using Hidden Markov Models 12:10 Mon 5 Sep, 2011 :: 5.57 Ingkarni Wardli :: Mr Sean Robinson :: University of Adelaide
Time course microarray experiments allow for insight into biological processes by measuring gene expression over a time period of interest. This project is concerned with time course data from a microarray experiment conducted on a particular variety of grapevine over the development of the grape berries at a number of different vineyards in South Australia. The aim of the project is to construct a methodology for combining the data from the different vineyards in order to obtain more precise estimates of the underlying behaviour of the genes over the development process. A major issue in doing so is that the rate of development of the grape berries is different at different vineyards.
Hidden Markov models (HMMs) are a well-established methodology for modelling time series data in a number of domains and have previously been used for gene expression analysis. Modelling the grapevine data presents a unique modelling issue, namely the alignment of the expression profiles needed to combine the data from different vineyards. In this seminar, I will describe our problem, review HMMs, present an extension to HMMs and show some preliminary results modelling the grapevine data. 

Statistical analysis of metagenomic data from the microbial community involved in industrial bioleaching 12:10 Mon 19 Sep, 2011 :: 5.57 Ingkarni Wardli :: Ms Susana SotoRojo :: University of Adelaide
In the last two decades heap bioleaching has become established as a successful commercial option for recovering copper from low-grade secondary sulfide ores. Genetics-based approaches have recently been employed in the task of characterizing mineral processing bacteria. Data analysis is a key issue and thus the implementation of adequate mathematical and statistical tools is of fundamental importance to draw reliable conclusions. In this talk I will give an account of two specific problems that we have been working on: the first regards experimental design, and the second the modeling of composition and activity of the microbial consortium. 

Understanding the dynamics of event networks 15:00 Wed 28 Sep, 2011 :: B.18 Ingkarni Wardli :: Dr Amber Tomas :: The University of Oxford
Within many populations there are frequent communications between pairs of individuals. Such communications might be emails sent within a company, radio communications in a disaster zone or diplomatic communications between states. Often it is of interest to understand the factors that drive the observed patterns of such communications, or to study how these factors are changing over time. Communications can be thought of as events occurring on the edges of a network which connects individuals in the population. In this talk I'll present a model for such communications which uses ideas from social network theory to account for the complex correlation structure between events. Applications to the Enron email corpus and the dynamics of hospital ward transfer patterns will be discussed.

On the role of mixture distributions in the modelling of heterogeneous data 15:10 Fri 14 Oct, 2011 :: 7.15 Ingkarni Wardli :: Prof Geoff McLachlan :: University of Queensland
Media...We consider the role that finite mixture distributions have played in the modelling of heterogeneous data, in particular for clustering continuous data via mixtures of normal distributions. A very brief history is given starting with the seminal papers by Day and Wolfe in the sixties before the appearance of the EM algorithm. It was the publication in 1977 of the latter algorithm by Dempster, Laird, and Rubin that greatly stimulated interest in the use of finite mixture distributions to model heterogeneous data. This is because the fitting of mixture models by maximum likelihood is a classic example of a problem that is simplified considerably by the EM's conceptual unification of maximum likelihood estimation from data that can be viewed as being incomplete. In recent times there has been a proliferation of applications in which the number of experimental units n is comparatively small but the underlying dimension p is extremely large as, for example, in microarray-based genomics and other high-throughput experimental approaches. Hence there has been increasing attention given not only in bioinformatics and machine learning, but also in mainstream statistics, to the analysis of complex data in this situation where n is small relative to p. The latter part of the talk shall focus on the modelling of such high-dimensional data using mixture distributions.
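The EM fit of a normal mixture described above can be sketched compactly. The following is an illustrative pure-Python implementation for a two-component univariate mixture on simulated data; it is not the speaker's software, and the initialisation and fixed iteration count are deliberately crude.

```python
import math
import random

def em_two_normals(data, n_iter=200):
    """Fit a two-component univariate Gaussian mixture by EM (sketch)."""
    # Crude initialisation from the data range.
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            dens = [pi[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: weighted maximum-likelihood updates.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk
    return pi, mu, var

random.seed(1)
data = [random.gauss(0, 1) for _ in range(200)] + \
       [random.gauss(5, 1) for _ in range(200)]
pi, mu, var = em_two_normals(data)
```

The "incomplete data" view is visible here: the unobserved component labels are replaced in the E-step by their conditional expectations (the responsibilities), after which the M-step reduces to weighted sample moments.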

Forecasting electricity demand distributions using a semiparametric additive model 15:10 Fri 16 Mar, 2012 :: B.21 Ingkarni Wardli :: Prof Rob Hyndman :: Monash University
Media...Electricity demand forecasting plays an important role in short-term load allocation and long-term planning for future generation facilities and transmission augmentation. Planners must adopt a probabilistic view of potential peak demand levels, therefore density forecasts (providing estimates of the full probability distributions of the possible future values of the demand) are more helpful than point forecasts, and are necessary for utilities to evaluate and hedge the financial risk accrued by demand variability and forecasting uncertainty.
Electricity demand in a given season is subject to a range of uncertainties, including underlying population growth, changing technology, economic conditions, prevailing weather conditions (and the timing of those conditions), as well as the general randomness inherent in individual usage. It is also subject to some known calendar effects due to the time of day, day of week, time of year, and public holidays.
I will describe a comprehensive forecasting solution designed to take all the available information into account, and to provide forecast distributions from a few hours ahead to a few decades ahead. We use semiparametric additive models to estimate the relationships between demand and the covariates, including temperatures, calendar effects and some demographic and economic variables. Then we forecast the demand distributions using a mixture of temperature simulation, assumed future economic scenarios, and residual bootstrapping. The temperature simulation is implemented through a new seasonal bootstrapping method with variable blocks.
The model is being used by the state energy market operators and some electricity supply companies to forecast the probability distribution of electricity demand in various regions of Australia. It also underpinned the Victorian Vision 2030 energy strategy. 
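The temperature simulation described above rests on resampling blocks of a series so that short-range dependence is preserved. The sketch below is a plain moving-block bootstrap on an invented synthetic temperature series; the seasonal variable-block scheme developed in the talk is considerably more elaborate.

```python
import random

def block_bootstrap(series, block_len, seed=None):
    """Resample a time series by concatenating randomly chosen contiguous
    blocks. Within-block ordering preserves short-range dependence
    (a plain moving-block bootstrap, not the seasonal variable-block
    method from the talk)."""
    rng = random.Random(seed)
    n = len(series)
    out = []
    while len(out) < n:
        start = rng.randrange(0, n - block_len + 1)
        out.extend(series[start:start + block_len])
    return out[:n]

# Invented synthetic "daily temperatures" with a weekly cycle plus noise.
temps = [20 + 5 * (i % 7) / 7 + random.Random(i).gauss(0, 1)
         for i in range(100)]
sample = block_bootstrap(temps, block_len=7, seed=42)
```

Repeating the resampling many times, and pushing each simulated temperature path through the fitted demand model, yields an ensemble of demand paths from which the forecast distribution can be read off.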

Fast-track study of viscous flow over topography using 'Smoothed Particle Hydrodynamics' 12:10 Mon 16 Apr, 2012 :: 5.57 Ingkarni Wardli :: Mr Stephen Wade :: University of Adelaide
Media...Motivated by certain tea room discussions, I am going to (attempt to) model the flow of a viscous fluid under gravity over conical topography. The method used is 'Smoothed Particle Hydrodynamics' (SPH), which is an easy-to-use but perhaps limited-accuracy computational method. The model could be extended to include solidification and thermodynamic effects that can also be implemented within the framework of SPH, and this has the obvious practical application to the modelling of the coverage of ice cream with ice magic, I mean, lava flows.
If I fail to achieve this within the next 4 weeks, I will have to go through a talk on SPH that I gave during honours instead. 

Multiscale models of collective cell behaviour: Linear or nonlinear diffusion? 15:10 Fri 4 May, 2012 :: B.21 Ingkarni Wardli :: Dr Matthew Simpson :: Queensland University of Technology
Media...Continuum diffusion models are often used to represent the collective motion of cell populations. Most previous studies have simply used linear diffusion to represent collective cell spreading, while others found that degenerate nonlinear diffusion provides a better match to experimental cell density profiles. There is no guidance available in the mathematical biology literature with regard to which approach is more appropriate. Furthermore, there is no knowledge of particular experimental measurements that can be made to distinguish between situations where these two models are appropriate. We provide a link between individual-based and continuum models using a multiscale approach in which we analyse the collective motion of a population of interacting agents in a generalized lattice-based exclusion process. For round agents that occupy a single lattice site, we find that the relevant continuum description is a linear diffusion equation, whereas for elongated rod-shaped agents that occupy L adjacent lattice sites we find that the relevant continuum description is a nonlinear diffusion equation related to the porous media equation. We show that there are several reasonable approaches for dealing with agent size effects, and that these different approaches are related mathematically through the concept of mean action time. We extend our results to consider proliferation and travelling waves where greater care must be taken to ensure that the continuum model replicates the discrete process. This is joint work with Dr Ruth Baker (Oxford) and Dr Scott McCue (QUT).

Are Immigrants Discriminated in the Australian Labour Market? 12:10 Mon 7 May, 2012 :: 5.57 Ingkarni Wardli :: Ms Wei Xian Lim :: University of Adelaide
Media...In this talk, I will present what I did in my honours project, which was to determine whether immigrants, categorised as immigrants from English-speaking countries and non-English-speaking countries, are discriminated against in the Australian labour market. To determine if discrimination exists, a decomposition of the wage function is applied and analysed via regression analysis. Two different methods of estimating the unknown parameters in the wage function will be discussed:
1. the Ordinary Least Square method,
2. the Quantile Regression method.
This is your rare chance of hearing me talk about non-nanomathematics-related stuff!
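For readers unfamiliar with the two estimation methods listed above, the sketch below contrasts closed-form OLS with a deliberately crude grid-search quantile (median) regression on invented data. A real analysis would use a proper linear-programming quantile solver rather than a grid search.

```python
def ols_fit(x, y):
    """Closed-form simple-regression OLS intercept and slope."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    b = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / \
        sum((xi - xb) ** 2 for xi in x)
    return yb - b * xb, b

def pinball_loss(a, b, x, y, tau):
    """Check (pinball) loss for the tau-th conditional quantile of y."""
    loss = 0.0
    for xi, yi in zip(x, y):
        r = yi - (a + b * xi)
        loss += tau * r if r >= 0 else (tau - 1) * r
    return loss

def quantile_fit(x, y, tau, grid=None):
    """Crude grid-search quantile regression (illustrative only)."""
    grid = grid or [i / 50 for i in range(-100, 101)]  # -2.0 .. 2.0
    return min(((a, b) for a in grid for b in grid),
               key=lambda ab: pinball_loss(ab[0], ab[1], x, y, tau))

# Invented data, roughly y = x.
x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [0.1, 1.1, 1.9, 3.2, 3.9, 5.1, 6.0, 6.9, 8.1, 9.0]
a_ols, b_ols = ols_fit(x, y)
a_med, b_med = quantile_fit(x, y, tau=0.5)
```

OLS estimates the conditional mean of wages, while quantile regression with different values of tau traces out the whole conditional wage distribution, which is why the two methods can tell different stories about discrimination at the bottom and top of the wage scale.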

Modelling protective anti-tumour immunity using a hybrid agent-based and delay differential equation approach 15:10 Fri 11 May, 2012 :: B.21 Ingkarni Wardli :: Dr Peter Kim :: University of Sydney
Media...Although cancers seem to consistently evade current medical treatments, the body's immune defences seem quite effective at controlling incipient tumours. Understanding how our immune systems provide such protection against early-stage tumours and how this protection could be lost will provide insight into designing next-generation immune therapies against cancer. To engage this problem, we formulate a mathematical model of the immune response against small, incipient tumours. The model considers the initial stimulation of the immune response in lymph nodes and the resulting immune attack on the tumour, and is formulated as a hybrid agent-based and delay differential equation model.

Evaluation and comparison of the performance of Australian and New Zealand intensive care units 14:10 Fri 25 May, 2012 :: 7.15 Ingkarni Wardli :: Dr Jessica Kasza :: The University of Adelaide
Media...Recently, the Australian Government has emphasised the need for monitoring and comparing the performance of Australian hospitals. Evaluating the performance of intensive care units (ICUs) is of particular importance, given that the most severe cases are treated in these units. Indeed, ICU performance can be thought of as a proxy for the overall performance of a hospital. We compare the performance of the ICUs contributing to the Australian and New Zealand Intensive Care Society (ANZICS) Adult Patient Database, the largest of its kind in the world, and identify those ICUs with unusual performance.
It is well-known that there are many statistical issues that must be accounted for in the evaluation of healthcare provider performance. Indicators of performance must be appropriately selected and estimated, investigators must adequately adjust for case-mix, statistical variation must be fully accounted for, and adjustment for multiple comparisons must be made. Our basis for dealing with these issues is the estimation of a hierarchical logistic model for the in-hospital death of each patient, with patients clustered within ICUs. Both patient- and ICU-level covariates are adjusted for, with a random intercept and random coefficient for the APACHE III severity score. Given that we expect most ICUs to have similar performance after adjustment for these covariates, we follow Ohlssen et al., JRSS A (2007), and estimate a null model that we expect the majority of ICUs to follow. This methodology allows us to rigorously account for the aforementioned statistical issues, and accurately identify those ICUs contributing to the ANZICS database that have comparatively unusual performance. This is joint work with Prof. Patty Solomon and Assoc. Prof. John Moran.

The change of probability measure for jump processes 12:10 Mon 28 May, 2012 :: 5.57 Ingkarni Wardli :: Mr Ahmed Hamada :: University of Adelaide
Media...In financial derivatives pricing theory, it is very common to change the probability measure from the historical "real-world" measure to a risk-neutral measure, as a development of the no-arbitrage condition.
Girsanov's theorem is the best-known example of this technique and is used when price randomness is modelled by Brownian motions. Other genuine candidates for modelling market randomness that have proved efficient in the recent literature are jump processes, so how can a change of measure be performed for such processes?
This talk will address this question by introducing the no-arbitrage condition, discussing Girsanov's theorem for diffusion and jump processes, and presenting a concrete example.
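As a concrete illustration of a change of measure for a jump process (not necessarily the example from the talk): for a Poisson process on [0, T], the Radon-Nikodym derivative that converts intensity lam into intensity mu on a path with n jumps is exp((lam - mu) T) (mu / lam)^n. The Monte Carlo check below reweights paths simulated at one intensity so that their expectations match those under the other intensity.

```python
import math
import random

def poisson_count(lam, T, rng):
    """Number of jumps of a rate-lam Poisson process on [0, T],
    built from exponential inter-arrival times."""
    t, n = rng.expovariate(lam), 0
    while t <= T:
        n += 1
        t += rng.expovariate(lam)
    return n

# Reweight paths simulated with intensity lam so that they behave
# as if the intensity were mu (the jump-process analogue of Girsanov).
lam, mu, T = 1.0, 2.0, 1.0
rng = random.Random(0)
total = 0.0
n_paths = 200_000
for _ in range(n_paths):
    n = poisson_count(lam, T, rng)
    # Radon-Nikodym derivative dQ/dP evaluated on this path:
    z = math.exp((lam - mu) * T) * (mu / lam) ** n
    total += z * n
estimate = total / n_paths  # should approximate E_Q[N_T] = mu * T = 2.0
```

The same weighting idea underlies risk-neutral pricing with jumps: expectations under the pricing measure are computed as reweighted expectations under the historical measure.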

Model turbulent floods based upon the Smagorinski large eddy closure 12:10 Mon 4 Jun, 2012 :: 5.57 Ingkarni Wardli :: Mr Meng Cao :: University of Adelaide
Media...Rivers, floods and tsunamis are often very turbulent. Conventional models of such environmental fluids are typically based on depth-averaged inviscid irrotational flow equations. We explore changing such a base to the turbulent Smagorinski large eddy closure. The aim is to more appropriately model the fluid dynamics of such complex environmental fluids by using such a turbulent closure. Large changes in fluid depth are allowed. Computer algebra constructs the slow manifold of the flow in terms of the fluid depth h and the mean turbulent lateral velocities u and v. The major challenge is to deal with the nonlinear stress tensor in the Smagorinski closure. The model integrates the effects of inertia, self-advection, bed drag, gravitational forcing and turbulent dissipation with minimal assumptions. Although the resultant model is close to established models, the real outcome is creating a sound basis for the modelling so others, in their modelling of more complex situations, can systematically include more complex physical processes.

A brief introduction to Support Vector Machines 12:30 Mon 4 Jun, 2012 :: 5.57 Ingkarni Wardli :: Mr Tyman Stanford :: University of Adelaide
Media...Support Vector Machines (SVMs) are used in a variety of contexts for a range of purposes including regression, feature selection and classification. To convey the basic principles of SVMs, this presentation will focus on the application of SVMs to classification. Classification (or discrimination), in a statistical sense, is supervised model creation for the purpose of assigning future observations to a group or class. An example might be assigning healthy or diseased labels to patients based on p characteristics obtained from a blood sample.
While SVMs are widely used, they are most successful when the data have one or more of the following properties:
The data are not consistent with a standard probability distribution.
The number of observations, n, used to create the model is less than the number of predictive features, p. (The so-called small-n, big-p problem.)
The decision boundary between the classes is likely to be nonlinear in the feature space.
I will present a short overview of how SVMs are constructed, keeping in mind their purpose. As this presentation is part of a double postgrad seminar, I will keep it to a maximum of 15 minutes.
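To make the construction concrete, here is a minimal linear SVM trained by stochastic sub-gradient descent on the regularised hinge loss (a Pegasos-style sketch on invented, well-separated data; a full SVM library would also offer kernels for the nonlinear decision boundaries mentioned above).

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM by stochastic sub-gradient descent on the
    L2-regularised hinge loss (Pegasos-style sketch)."""
    rng = random.Random(seed)
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):  # random pass order
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            # Shrink w (regularisation), then push towards margin violators.
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Invented separable data: class +1 near (2, 2), class -1 near (-2, -2).
rng = random.Random(1)
X = [[2 + rng.gauss(0, 0.5), 2 + rng.gauss(0, 0.5)] for _ in range(20)] + \
    [[-2 + rng.gauss(0, 0.5), -2 + rng.gauss(0, 0.5)] for _ in range(20)]
y = [1] * 20 + [-1] * 20
w, b = train_linear_svm(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

The hinge loss penalises only points inside or on the wrong side of the margin, which is why the fitted boundary is determined by a small set of "support vectors" rather than by every observation.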


Epidemiological consequences of household-based antiviral prophylaxis for pandemic influenza 14:10 Fri 8 Jun, 2012 :: 7.15 Ingkarni Wardli :: Dr Joshua Ross :: The University of Adelaide
Media...Antiviral treatment offers a fast-acting alternative to vaccination. It is viewed as a first line of defence against pandemic influenza, protecting families and household members once infection has been detected. In clinical trials antiviral treatment has been shown to be efficacious in preventing infection, limiting disease and reducing transmission, yet its impact at containing the 2009 influenza A(H1N1)pdm outbreak was limited. I will describe some of our work, which attempts to understand this seeming discrepancy, through the development of a general model and computationally efficient methodology for studying household-based interventions.
This is joint work with Dr Andrew Black (Adelaide), and Prof. Matt Keeling and Dr Thomas House (Warwick, U.K.). 

Adventures with group theory: counting and constructing polynomial invariants for applications in quantum entanglement and molecular phylogenetics 15:10 Fri 8 Jun, 2012 :: B.21 Ingkarni Wardli :: Dr Peter Jarvis :: The University of Tasmania
Media...In many modelling problems in mathematics and physics, a standard challenge is dealing with several repeated instances of a system under study. If linear transformations are involved, then the machinery of tensor products steps in, and it is the job of group theory to control how the relevant symmetries lift from a single system to having many copies. At the level of group characters, the construction which does this is called PLETHYSM.
In this talk all this will be contextualised via two case studies: entanglement invariants for multipartite quantum systems, and Markov invariants for tree reconstruction in molecular phylogenetics. By the end of the talk, listeners will have understood why Alice, Bob and Charlie love Cayley's hyperdeterminant, and they will know why the three squangles (polynomial beasts of degree 5 in 256 variables, with a modest 50,000 terms or so) can tell us a lot about quartet trees!

IGA Workshop: Dendroidal sets 14:00 Tue 12 Jun, 2012 :: Ingkarni Wardli B17 :: Dr Ittay Weiss :: University of the South Pacific
Media...A series of four 2-hour lectures by Dr Ittay Weiss.
The theory of dendroidal sets was introduced by Moerdijk and Weiss in 2007 in the study of homotopy operads in algebraic topology. In the five years that have passed since then, several fundamental and highly non-trivial results were established. For instance, it was established that dendroidal sets provide models for homotopy operads in a way that extends the Joyal-Lurie approach to homotopy categories. It can be shown that dendroidal sets provide new models in the study of n-fold loop spaces. And it has very recently been shown that dendroidal sets model all connective spectra in a way that extends the modelling of certain spectra by Picard groupoids.
The aim of the lecture series will be to introduce the concepts mentioned above, present the elementary theory, and understand the scope of the results mentioned as well as discuss the potential for further applications. Sources for the course will include the article "From Operads to Dendroidal Sets" (in the AMS volume on mathematical foundations of quantum field theory, also on the arXiv) and the lecture notes by Ieke Moerdijk "simplicial methods for operads and algebraic geometry" which resulted from an advanced course given in Barcelona three years ago.
No prior knowledge of operads will be assumed, nor any knowledge of homotopy theory that is more advanced than what is required for the definition of the fundamental group. The basics of the language of presheaf categories will be recalled quickly and used freely.

Comparison of spectral and wavelet estimators of transfer function for linear systems 12:10 Mon 18 Jun, 2012 :: B.21 Ingkarni Wardli :: Mr Mohd Aftar Abu Bakar :: University of Adelaide
Media...We compare spectral and wavelet estimators of the response amplitude operator (RAO) of a linear system, with various input signals and added noise scenarios. The comparison is based on a model of a heaving buoy wave energy device (HBWED), which oscillates vertically as a single mode of vibration linear system.
HBWEDs and other single degree of freedom wave energy devices such as the oscillating wave surge convertors (OWSC) are currently deployed in the ocean, making single degree of freedom wave energy devices important systems to both model and analyse in some detail. However, the results of the comparison relate to any linear system.
It was found that the wavelet estimator of the RAO offers no advantage over the spectral estimators if both input and response time series data are noise-free and long time series are available. If there is noise on only the response time series, only the wavelet estimator or the spectral estimator that uses the cross-spectrum of the input and response signals in the numerator should be used. For the case of noise on only the input time series, only the spectral estimator that uses the cross-spectrum in the denominator gives a sensible estimate of the RAO. If both the input and response signals are corrupted with noise, a modification to both the input and response spectrum estimates can provide a good estimator of the RAO. However, a combination of wavelet and spectral methods is introduced as an alternative RAO estimator.
The conclusions apply for autoregressive emulators of sea surface elevation, impulse, and pseudorandom binary sequences (PRBS) inputs. However, a wavelet estimator is needed in the special case of a chirp input where the signal has a continuously varying frequency. 

Inquiry-based learning: yesterday and today 15:30 Mon 9 Jul, 2012 :: Ingkarni Wardli B19 :: Prof Ron Douglas :: Texas A&M University
Media...The speaker will report on a project to develop and promote approaches to mathematics instruction closely related to the Moore method, methods which are called inquiry-based learning, as well as on his personal experience of the Moore method. For background, see the speaker's article in the May 2012 issue of the Notices of the American Mathematical Society. To download the article, click on "Media" above.

2012 AMSI-SSAI Lecture: Approximate Bayesian computation (ABC): advances and limitations 11:00 Fri 13 Jul, 2012 :: Engineering South S112 :: Prof Christian Robert :: Université Paris-Dauphine
Media...The lack of closed-form likelihoods has been the bane of Bayesian computation for many years and, prior to the introduction of MCMC methods, a strong impediment to the propagation of the Bayesian paradigm. We are now facing models where an MCMC completion of the model towards closed-form likelihoods seems unachievable and where a further degree of approximation appears unavoidable. In this talk, I will present the motivation for approximate Bayesian computation (ABC) methods, the consistency results already available, the various Monte Carlo implementations found in the current literature, as well as the inferential, rather than computational, challenges set by these methods. A recent advance based on empirical likelihood will also be discussed.
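The basic ABC idea can be sketched in a few lines: draw parameters from the prior, simulate data, and keep the draws whose summary statistic lands close to the observed one. Below is a toy rejection sampler for the mean of a normal sample (illustrative only, not a method from the lecture; the model, prior and tolerance are all invented).

```python
import random

def abc_rejection(observed_mean, n_obs, prior_draw, n_sims=20000,
                  tol=0.1, seed=0):
    """Rejection ABC: keep prior draws whose simulated summary statistic
    lands within tol of the observed one. No likelihood is evaluated."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_sims):
        theta = prior_draw(rng)
        sim = [rng.gauss(theta, 1) for _ in range(n_obs)]
        if abs(sum(sim) / n_obs - observed_mean) < tol:
            accepted.append(theta)
    return accepted

# Toy problem: infer the mean of a N(theta, 1) sample. True theta = 3,
# prior Uniform(0, 6), summary statistic = sample mean.
rng = random.Random(42)
data = [rng.gauss(3, 1) for _ in range(50)]
obs_mean = sum(data) / len(data)
posterior = abc_rejection(obs_mean, 50, lambda r: r.uniform(0, 6))
post_mean = sum(posterior) / len(posterior)
```

The accepted draws approximate the posterior only up to the tolerance and the choice of summary statistic, which is exactly where the inferential challenges discussed in the lecture arise.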

Infectious diseases modelling: from biology to public health policy 15:10 Fri 24 Aug, 2012 :: B.20 Ingkarni Wardli :: Dr James McCaw :: The University of Melbourne
Media...The mathematical study of humantohuman transmissible pathogens has
established itself as a complementary methodology to the traditional
epidemiological approach. The classic susceptibleinfectiousrecovered
model paradigm has been used to great effect to gain insight into the
epidemiology of endemic diseases such as influenza and pertussis, and
the emergence of novel pathogens such as SARS and pandemic influenza.
The modelling paradigm has also been taken within the host and used to
explain the withinhost dynamics of viral (or bacterial or parasite)
infections, with implications for our understanding of infection,
emergence of drug resistance and optimal druginterventions.
In this presentation I will provide an overview of the mathematical
paradigm used to investigate both biological and epidemiological
infectious diseases systems, drawing on case studies from influenza,
malaria and pertussis research. I will conclude with a summary of how
infectious diseases modelling has assisted the Australian government in
developing its pandemic preparedness and response strategies.


Two classes of network structures that enable efficient information transmission 15:10 Fri 7 Sep, 2012 :: B.20 Ingkarni Wardli :: A/Prof Sanming Zhou :: The University of Melbourne
Media...What network topologies should we use in order to achieve efficient information transmission? Of course the answer to this question depends on how we measure the efficiency of information dissemination. If we measure it by the minimum gossiping time under the store-and-forward, all-port and full-duplex model, we show that certain Cayley graphs associated with Frobenius groups are `perfect' in a sense. (A Frobenius group is a permutation group which is transitive but not regular such that only the identity element can fix two points.) Such graphs are also optimal for all-to-all routing in the sense that the maximum load on edges achieves the minimum. In this talk we will discuss this theory of optimal network design.

Electrokinetics of concentrated suspensions of spherical particles 15:10 Fri 28 Sep, 2012 :: B.21 Ingkarni Wardli :: Dr Bronwyn BradshawHajek :: University of South Australia
Electrokinetic techniques are used to gather specific information about concentrated dispersions such as electronic inks, mineral processing slurries, pharmaceutical products and biological fluids (e.g. blood). But, like most experimental techniques, intermediate quantities are measured, and consequently the method relies explicitly on theoretical modelling to extract the quantities of experimental interest. A self-consistent cell-model theory of electrokinetics can be used to determine the electrical conductivity of a dense suspension of spherical colloidal particles, and thereby determine the quantities of interest (such as the particle surface potential). The numerical predictions of this model compare well with published experimental results. High-frequency asymptotic analysis of the cell-model leads to some interesting conclusions.

Rescaling the coalescent 12:30 Mon 8 Oct, 2012 :: B.21 Ingkarni Wardli :: Mr Adam Rohrlach :: University of Adelaide
Media...Recently I gave a short talk about how researchers use mathematics to estimate the time since a species' most recent common ancestor. I also pointed out why this generally doesn't work when a population hasn't had a constant size. Then I quickly changed the subject. In this talk I aim to reintroduce the Coalescent Model, show how it works in general, and finally how researchers deal with a varying population size.
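For the constant-population-size case mentioned above, the coalescent is easy to simulate: with k lineages remaining, the next coalescence occurs after an Exponential(k(k-1)/2) waiting time. The sketch below checks the standard result E[T_MRCA] = 2(1 - 1/n) by Monte Carlo; handling a varying population size would require rescaling these waiting times, which is the subject of the talk.

```python
import random

def tmrca(n, rng):
    """Simulate the time to the most recent common ancestor of a sample
    of n lineages under the standard constant-size coalescent.
    Time is measured in coalescent units (units of N generations)."""
    t, k = 0.0, n
    while k > 1:
        # With k lineages, the next coalescence occurs at rate k(k-1)/2.
        t += rng.expovariate(k * (k - 1) / 2)
        k -= 1
    return t

rng = random.Random(0)
n = 10
mean_tmrca = sum(tmrca(n, rng) for _ in range(100_000)) / 100_000
# Theory: E[T_MRCA] = 2 * (1 - 1/n) = 1.8 coalescent units for n = 10.
```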

Probability, what can it tell us about health? 13:10 Tue 9 Oct, 2012 :: 7.15 Ingkarni Wardli :: Prof Nigel Bean :: School of Mathematical Sciences
Media...Clinical trials are the way in which modern medical systems test whether individual treatments are worthwhile. Sophisticated statistics is used to try and make the conclusions from clinical trials as meaningful as possible. What can a very simple probability model then tell us about the worth of multiple treatments? What might the implications of this be for the whole health system?
This talk is based on research currently being conducted with a physician at a major Adelaide hospital. It requires no health knowledge and was not tested on animals. All you need is an enquiring and open mind.


Multiscale models of evolutionary epidemiology: where is HIV going? 14:00 Fri 19 Oct, 2012 :: Napier 205 :: Dr Lorenzo Pellis :: The University of Warwick
An important component of pathogen evolution at the population level is evolution within hosts, which can alter the composition of genotypes available for transmission as infection progresses. I will present a deterministic multiscale model, linking the within-host competition dynamics with the transmission dynamics at the population level. I will take HIV as an example of how this framework can help clarify the conflicting evolutionary pressures an infectious disease might be subject to.

AD Model Builder and the estimation of lobster abundance 12:10 Mon 22 Oct, 2012 :: B.21 Ingkarni Wardli :: Mr John Feenstra :: University of Adelaide
Media...Determining how many millions of lobsters reside in our waters and how it changes over time is a central aim of lobster stock assessment. ADMB is powerful optimisation software to model and solve complex nonlinear problems using automatic differentiation and plays a major role in SA and worldwide in fisheries stock assessment analyses. In this talk I will provide a brief description of an example modelling problem, key features and use of ADMB. 

Epidemic models in socially structured populations: when are simple models too simple? 14:00 Thu 25 Oct, 2012 :: 5.56 Ingkarni Wardli :: Dr Lorenzo Pellis :: The University of Warwick
Both age and household structure are recognised as important heterogeneities affecting epidemic spread of infectious pathogens, and many models exist nowadays that include either or both forms of heterogeneity. However, different models may fit aggregate epidemic data equally well and nevertheless lead to different predictions of public health interest. I will here present an overview of stochastic epidemic models with increasing complexity in their social structure, focusing in particular on household models. For these models, I will present recent results about the definition and computation of the basic reproduction number R0 and its relationship with other threshold parameters. Finally, I will use these results to compare models with no, either or both age and household structure, with the aim of quantifying the conditions under which each form of heterogeneity is relevant and therefore providing some criteria that can be used to guide model design for real-time predictions.

Thin-film flow in helically-wound channels with small torsion 15:10 Fri 26 Oct, 2012 :: B.21 Ingkarni Wardli :: Dr Yvonne Stokes :: University of Adelaide
The study of flow in open helically-wound channels has application to many natural and industrial flows. We will consider laminar flow down helically-wound channels of rectangular cross-section and with small torsion, in which the fluid depth is small. Assuming a steady-state flow that is independent of position along the axis of the channel, the flow solution may be determined in the two-dimensional cross-section of the channel. A thin-film approximation yields explicit expressions for the fluid velocity in terms of the free-surface shape. The latter satisfies an interesting nonlinear ordinary differential equation that, for a channel of rectangular cross-section, has an analytical solution. The predictions of the thin-film model are shown to be in good agreement with much more computationally intensive solutions of the small-helix-torsion Navier-Stokes equations.
This work has particular relevance to spiral particle separators used in the minerals processing industry. Early work on modelling of particle-laden thin-film flow in spiral channels will also be discussed.

Spatiotemporally Autoregressive Partially Linear Models with Application to the Housing Price Indexes of the United States 12:10 Mon 12 Nov, 2012 :: B.21 Ingkarni Wardli :: Ms Dawlah Alsulami :: University of Adelaide
Media...We propose a Spatio-temporal Autoregressive Partially Linear Regression (STARPLR) model for data observed irregularly over space and regularly in time. The model is capable of capturing possible non-linearity and non-stationarity in space by allowing the coefficients to depend on locations. We suggest a two-step procedure to estimate both the coefficients and the unknown function, which is readily implemented and can be computed even for large spatio-temporal data sets. As an illustration, we apply our model to analyze the 51 States' House Price Indexes (HPIs) in the USA. 

Dynamics of microbial populations from a copper sulphide leaching heap 12:30 Mon 12 Nov, 2012 :: B.21 Ingkarni Wardli :: Ms Susana Soto Rojo :: University of Adelaide
Media...We are interested in the dynamics of the microbial population from a copper sulphide bioleaching heap. The composition of the microbial consortium is closely related to the kinetics of the oxidation processes that lead to copper recovery. Using a nonlinear model, which considers the effect of substrate depletion and incorporates spatial dependence, we analyse the correlation between adjacent strips, patterns of microbial succession, the relevance of pertinent physico-chemical parameters and the implications of the absence of barriers between the three lifts of the heap. We also explore how the dynamics of the microbial community relate to the mineral composition of the individual strips of the bioleaching pile. 

Modern trends in dynamo theory 15:10 Fri 16 Nov, 2012 :: B.20 Ingkarni Wardli :: Prof Michael Proctor :: University of Cambridge
Media...Dynamo action is the process by which magnetic fields in astrophysical bodies (and recently, laboratory fluids) are maintained against resistive losses by Faraday induction. For many years a favoured model of this process, known as mean-field electrodynamics, has been widely used to produce tractable models. I shall present a critique of this theory and contrast it with another dynamo process (small-scale dynamo action) that does not, unlike mean-field electrodynamics, rely on broken reflection symmetry or scale separation. Finally, I shall talk about very recent rigorous results concerning the Archontis dynamo, in which the magnetic and velocity fields are closely aligned.


Asymptotic independence of (simple) two-dimensional Markov processes 15:10 Fri 1 Mar, 2013 :: B.18 Ingkarni Wardli :: Prof Guy Latouche :: Universite Libre de Bruxelles
Media...The one-dimensional birth-and-death model is one of the basic processes in applied probability, but difficulties appear as one moves to higher dimensions. In the positive recurrent case, the situation is singularly simplified if the stationary distribution has product form. We investigate the conditions under which this property holds, and we show how to use this knowledge to find product-form approximations for otherwise unmanageable random walks. This is joint work with Masakiyo Miyazawa and Peter Taylor. 

How fast? Bounding the mixing time of combinatorial Markov chains 15:10 Fri 22 Mar, 2013 :: B.18 Ingkarni Wardli :: Dr Catherine Greenhill :: University of New South Wales
Media...A Markov chain is a stochastic process which is "memoryless",
in that the next state of the chain depends only on the current state,
and not on how it got there. It is a classical result that an ergodic
Markov chain has a unique stationary distribution.
However, classical theory does not provide any information on the rate of
convergence to stationarity. Around 30 years ago, the mixing time of
a Markov chain was introduced to measure the number of steps required
before the distribution of the chain is within some small distance of
the stationary distribution. One reason why this is important is that
researchers in areas such as physics and biology use Markov chains to
sample from large sets of interest. Rigorous bounds on the mixing time
of their chain allow these researchers to have confidence in their results.
Bounding the mixing time of combinatorial Markov chains can be a challenge, and there are only a few approaches available. I will discuss the main methods and give examples for each (with pretty pictures). 
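As a toy illustration of the ideas in this abstract (a sketch, not code from the talk), the snippet below builds a small lazy random walk on a 4-cycle, recovers its stationary distribution as the leading left eigenvector of the transition matrix, and counts the steps until the total variation distance to stationarity falls below a threshold. The chain and the threshold 0.25 are illustrative choices.

```python
import numpy as np

# Transition matrix of a lazy random walk on a cycle of 4 states:
# stay put with probability 1/2, otherwise step to a uniform neighbour.
P = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

# Stationary distribution: left eigenvector of P for eigenvalue 1
# (uniform here, by symmetry).
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()

def mixing_time(P, pi, start=0, eps=0.25):
    """Steps until the total variation distance to pi falls below eps,
    starting from a point mass at `start`."""
    mu = np.zeros(len(pi))
    mu[start] = 1.0
    t = 0
    while 0.5 * np.abs(mu - pi).sum() >= eps:
        mu = mu @ P
        t += 1
    return t
```

For this tiny symmetric chain the distribution equilibrates after only a couple of steps; the hard part, as the abstract notes, is proving such bounds for combinatorially large state spaces where the matrix cannot even be written down.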

The boundary conditions for macroscale modelling of a discrete diffusion system with periodic diffusivity 12:10 Mon 29 Apr, 2013 :: B.19 Ingkarni Wardli :: Chen Chen :: University of Adelaide
Media...Many mathematical and engineering problems have a multiscale nature. There is a vast body of theory supporting multiscale modelling on infinite domains, such as homogenization theory and centre manifold theory. To date, there has been little consideration of the correct boundary conditions to be used at the edge of the macroscale model. In this seminar, I will present how to derive macroscale boundary conditions for the diffusion system. 

Filtering Theory in Modelling the Electricity Market 12:10 Mon 6 May, 2013 :: B.19 Ingkarni Wardli :: Ahmed Hamada :: University of Adelaide
Media...In mathematical finance, as in many other fields where applied mathematics is a powerful tool, we assume that a model is good enough when it captures the different sources of randomness affecting the quantity of interest, which in this case is electricity prices. The power market is very different from other markets in terms of the sources of randomness that can be observed in the price features and evolution. We start by suggesting a new model that simulates electricity prices; this new model is constructed by adding a periodicity term, a jump term and a positive mean-reverting term. The latter term is driven by a non-observable Markov process. So in order to price some financial products, we have to use filtering theory to deal with the non-observable process; these techniques are gaining much interest from practitioners and researchers in the field of financial mathematics. 

Neuronal excitability and canards 15:10 Fri 10 May, 2013 :: B.18 Ingkarni Wardli :: A/Prof Martin Wechselberger :: University of Sydney
Media...The notion of excitability was first introduced in an attempt to understand firing properties of neurons. It was Alan Hodgkin who identified three basic types (classes) of excitable axons (integrator, resonator and differentiator) distinguished by their different responses to injected steps of currents of various amplitudes.
Pioneered by Rinzel and Ermentrout, bifurcation theory explains repetitive (tonic) firing patterns for adequate steady inputs in integrator (type I) and resonator (type II) neuronal models. In contrast, the dynamic behavior of differentiator (type III) neurons cannot be explained by standard dynamical systems theory. This third type of excitable neuron encodes a dynamic change in the input and leads naturally to a transient response of the neuron.
In this talk, I will show that "canards" (peculiar mathematical creatures) are well suited to explain the nature of transient responses of neurons due to dynamic (smooth) inputs. I will apply this geometric theory to a simple driven FitzHugh-Nagumo/Morris-Lecar type neural model and to a more complicated neural model that describes paradoxical excitation due to propofol anesthesia. 

Progress in the prediction of buoyancy-affected turbulence 15:10 Fri 17 May, 2013 :: B.18 Ingkarni Wardli :: Dr Daniel Chung :: University of Melbourne
Media...Buoyancy-affected turbulence represents a significant challenge to our understanding, yet it dominates many important flows that occur in the ocean and atmosphere. The presentation will highlight some recent progress in the characterisation, modelling and prediction of buoyancy-affected turbulence using direct and large-eddy simulations, along with implications for the characterisation of mixing in the ocean and the low-cloud feedback in the atmosphere. Specifically, direct numerical simulation data of stratified turbulence will be employed to highlight the importance of boundaries in the characterisation of turbulent mixing in the ocean. Then, a subgrid-scale model that captures the anisotropic character of stratified mixing will be developed for large-eddy simulation of buoyancy-affected turbulence. Finally, the subgrid-scale model is utilised to perform a systematic large-eddy simulation investigation of the archetypal low-cloud regimes, from which the link between the lower-tropospheric stability criterion and the cloud fraction is interpreted. 

Multiscale modelling couples patches of wave-like simulations 12:10 Mon 27 May, 2013 :: B.19 Ingkarni Wardli :: Meng Cao :: University of Adelaide
Media...A multiscale model is proposed to significantly reduce the expensive numerical simulations of complicated waves over large spatial domains. The multiscale model is built from given microscale simulations of complicated physical processes such as sea ice or turbulent shallow water. Our long-term aim is to enable macroscale simulations obtained by coupling small patches of simulations together over large physical distances. This initial work explores the coupling of patch simulations of wave-like PDEs. With the line of development being towards water waves, we discuss the dynamics of two complementary fields called the 'depth' h and 'velocity' u. A staggered grid is used for the microscale simulation of the depth h and velocity u. We introduce a macroscale staggered grid to couple the microscale patches. Linear or quadratic interpolation provides boundary conditions on the field in each patch. Linear analysis of the whole coupled multiscale system establishes that the resultant macroscale dynamics is appropriate. Numerical simulations support the linear analysis. This multiscale method should empower the feasible computation of large-scale simulations of wave-like dynamics with complicated underlying physics. 

Markov decision processes and interval Markov chains: what is the connection? 12:10 Mon 3 Jun, 2013 :: B.19 Ingkarni Wardli :: Mingmei Teo :: University of Adelaide
Media...Markov decision processes are a way to model processes which involve some sort of decision making and interval Markov chains are a way to incorporate uncertainty in the transition probability matrix. How are these two concepts related? In this talk, I will give an overview of these concepts and discuss how they relate to each other. 
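A minimal value-iteration sketch (illustrative only, not from the talk) shows the decision-making side of an MDP: each action fixes a row of the transition matrix, and a stationary policy induces an ordinary Markov chain. One can loosely think of an interval Markov chain the same way, with the "actions" being the admissible transition matrices within the given intervals. The matrices and rewards below are made-up toy values.

```python
import numpy as np

# A toy MDP with 2 states and 2 actions. P[a][s, s'] is the transition
# probability from s to s' under action a; R[a][s] is the reward for
# taking action a in state s.
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # action 0
     np.array([[0.1, 0.9], [0.7, 0.3]])]   # action 1
R = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]
gamma = 0.9  # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate the Bellman optimality operator to convergence."""
    V = np.zeros(P[0].shape[0])
    while True:
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)          # best achievable value per state
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=0)   # values and greedy policy
        V = V_new

V, policy = value_iteration(P, R, gamma)
```

Fixing the returned `policy` (one action per state) selects one transition matrix, and the controlled process collapses to a plain Markov chain, which is exactly the connection the talk explores.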

Birational geometry of M_g 12:10 Fri 21 Jun, 2013 :: Ingkarni Wardli B19 :: Dr Jarod Alper :: Australian National University
In 1969, Deligne and Mumford introduced a beautiful compactification of the moduli space of smooth curves which has proved extremely influential in geometry, topology and physics. Using recent advances in higher dimensional geometry and the minimal model program, we study the birational geometry of M_g. In particular, in an effort to understand the canonical model of M_g, we study the log canonical models as well as the associated divisorial contractions and flips by interpreting these models as moduli spaces of particular singular curves. 

Fire-Atmosphere Models 12:10 Mon 29 Jul, 2013 :: B.19 Ingkarni Wardli :: Mika Peace :: University of Adelaide
Media...Fire behaviour models are increasingly being used to assist in planning and operational decisions for bush fires and fuel reduction burns. Rate of spread (ROS) of the fire front is a key output of such models. The ROS value is typically calculated from a formula which has been derived from empirical data, using very simple meteorological inputs. We have used a coupled fire-atmosphere model to simulate real bushfire events. The results show that complex interactions between a fire and the atmosphere can have a significant influence on fire spread, thus highlighting the limitations of a model that uses simple meteorological inputs. 

The Hamiltonian Cycle Problem and Markov Decision Processes 15:10 Fri 2 Aug, 2013 :: B.18 Ingkarni Wardli :: Prof Jerzy Filar :: Flinders University
Media...We consider the famous Hamiltonian cycle problem (HCP) embedded in a Markov decision process (MDP). More specifically, we consider a moving object on a graph G where, at each vertex, a controller may select an arc emanating from that vertex according to a probabilistic decision rule. A stationary policy is simply a control where these decision rules are time-invariant. Such a policy induces a Markov chain on the vertices of the graph. Therefore, HCP is equivalent to a search for a stationary policy that induces a 0-1 probability transition matrix whose nonzero entries trace out a Hamiltonian cycle in the graph. A consequence of this embedding is that we may consider the problem over a number of alternative, convex (rather than discrete) domains. These include: (a) the space of stationary policies, (b) the more restricted but very natural space of doubly stochastic matrices induced by the graph, and (c) the associated spaces of so-called "occupational measures". This approach has led to both theoretical and algorithmic advances on the underlying HCP. In this presentation, we outline a selection of results generated by this line of research. 
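The embedding can be made concrete with a small check (a sketch, not code from the talk): a deterministic stationary policy picks exactly one arc per vertex, giving a 0-1 transition matrix, and HCP asks whether those nonzero entries trace a single cycle through every vertex.

```python
import numpy as np

def induces_hamiltonian_cycle(P):
    """Given a 0-1 transition matrix with exactly one arc chosen per
    vertex, check whether the chosen arcs trace one cycle through
    every vertex of the graph."""
    n = P.shape[0]
    succ = P.argmax(axis=1)        # the unique successor of each vertex
    v, visited = 0, set()
    for _ in range(n):
        if v in visited:
            return False           # revisited a vertex too early
        visited.add(v)
        v = succ[v]
    return v == 0 and len(visited) == n

# Deterministic policy on a 4-vertex graph choosing 0->1->2->3->0.
P = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    P[i, j] = 1.0
```

Here the policy does induce a Hamiltonian cycle; a policy whose arcs split into two short cycles would fail the check, which is precisely what the convex relaxations in the talk try to exclude.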

The Löwenheim-Skolem theorem 12:10 Mon 26 Aug, 2013 :: B.19 Ingkarni Wardli :: William Crawford :: University of Adelaide
Media...For those of us who didn't do an undergrad course in logic, the foundations of set theory are pretty daunting. I will give a rundown of some of the basics and then talk about a lesser known, but interesting result: the Löwenheim-Skolem theorem. One of the consequences of the theorem is that a set can be countable in one model of set theory, while being uncountable in another. 

Knots and Quantum Computation 15:10 Fri 6 Sep, 2013 :: B.18 Ingkarni Wardli :: Dr Scott Morrison :: Australian National University
Media...I'll begin with the Jones polynomial, a knot invariant discovered 30 years ago that radically changed our view of topology. From there, we'll visit the complexity of evaluating the Jones polynomial, the topological field theories related to the Jones polynomial, and how all these ideas come together to offer an unorthodox model for quantum computation. 

Symmetry gaps for geometric structures 15:10 Fri 20 Sep, 2013 :: B.18 Ingkarni Wardli :: Dr Dennis The :: Australian National University
Media...Klein's Erlangen program classified geometries based on their (transitive) groups of symmetries, e.g. Euclidean geometry is the quotient of the rigid motion group by the subgroup of rotations. While this perspective is homogeneous, Riemann's generalization of Euclidean geometry is in general very "lumpy", i.e. there exist Riemannian manifolds that have no symmetries at all. A common generalization where a group still plays a dominant role is Cartan geometry, which first arose in Cartan's solution to the equivalence problem for geometric structures, and which articulates what a "curved version" of a flat (homogeneous) model means. Parabolic geometries are Cartan geometries modelled on (generalized) flag varieties (e.g. projective space, isotropic Grassmannians), which are well-known objects from the representation theory of semisimple Lie groups. These curved versions encompass a zoo of interesting geometries, including conformal, projective, CR, systems of 2nd order ODEs, etc. This interaction between differential geometry and representation theory has proved extremely fruitful in recent years. My talk will be an example-based tour of various types of parabolic geometries, which I'll use to outline some of the main aspects of the theory (suppressing technical details). The main thread throughout the talk will be the symmetry gap problem: for a given type of Cartan geometry, the maximal symmetry dimension is realized by the flat model, but what is the next possible ("submaximal") symmetry dimension? I'll sketch a recent solution (in joint work with Boris Kruglikov) for a wide class of parabolic geometries which gives a combinatorial recipe for reading the submaximal symmetry dimension from a Dynkin diagram. 

Controlling disease, one household at a time. 12:10 Mon 23 Sep, 2013 :: B.19 Ingkarni Wardli :: Michael Lydeamore :: University of Adelaide
Pandemics and epidemics have always caused significant disruption to society. Attempting to model each individual in any reasonably sized population is infeasible at best, but we can get surprisingly good results just by looking at a single household in a population. In this talk, I'll try to guide you through the logic I've discovered this year, and present some of the key results we've obtained so far, as well as provide a brief indication of what's to come. 

Modelling the South Australian garfish population slice by slice. 12:10 Mon 14 Oct, 2013 :: B.19 Ingkarni Wardli :: John Feenstra :: University of Adelaide
Media...In this talk I will provide a taste of how South Australian garfish populations are modelled. The role and importance of garfish 'slices' will be explained and how these help produce important reporting quantities of yearly recruitment, legal-size biomass, and exploitation rate within a framework of an age- and length-based population model. 

How the leopard got his spots 14:10 Mon 14 Oct, 2013 :: 7.15 Ingkarni Wardli :: Dr Ed Green :: School of Mathematical Sciences
Media...Patterns are everywhere in nature, whether they be the spots and stripes on animals' coats, or the intricate arrangement of different cell types in a tissue. But how do these patterns arise? Whilst every cell contains a plan of the organism in its genes, the cells need to organise themselves so that each knows what it should do to achieve this plan. Mathematics can help biologists explore how different types of signals might be used to control the patterning process. In this talk, I will introduce two simple mathematical theories of biological pattern formation: Turing patterns where, surprisingly, the essential ingredient for producing the pattern is diffusion, which usually tends to make things more uniform; and the Keller-Segel model, which provides a simple mechanism for the formation of multicellular structures from isolated single cells. These mathematical models can be used to explain how tissues develop, and why there are many spotted animals with a stripy tail, but no stripy animals with a spotted tail. 

Model Misspecification due to Site Specific Rate Heterogeneity: how is tree inference affected? 12:10 Mon 21 Oct, 2013 :: B.19 Ingkarni Wardli :: Stephen Crotty :: University of Adelaide
Media...In this talk I'll answer none of the questions you ever had about phylogenetics, but hopefully some you didn't. I'll be giving this presentation at a phylogenetics conference in 3 weeks, so sorry it is a little light on background. You've been warned!
Phylogeneticists have long recognised that different sites in a DNA sequence can experience different rates of nucleotide substitution, and many models have been developed to accommodate this rate heterogeneity. But what happens when a single site exhibits rate heterogeneity along different branches of an evolutionary tree?
In this talk I'll introduce the notion of Site Specific Rate Heterogeneity (SSRH) and investigate a simple case, looking at the impact of SSRH on inference via maximum parsimony, neighbour joining and maximum likelihood. 

Group meeting 15:10 Fri 25 Oct, 2013 :: 5.58 (Ingkarni Wardli) :: Dr Ben Binder and Mr David Wilke :: University of Adelaide
Dr Ben Binder :: 'An inverse approach for solutions to free-surface flow problems' :: Abstract: Surface water waves are familiar to most people, for example, the wave pattern generated at the stern of a ship. The boundary or interface between the air and water is called the free surface. When determining a solution to a free-surface flow problem it is commonplace for the forcing (e.g. shape of ship or waterbed topography) that creates the surface waves to be prescribed, with the free surface coming as part of the solution. Alternatively, one can choose to prescribe the shape of the free surface and find the forcing inversely. In this talk I will discuss my ongoing work using an inverse approach to discover new types of solutions to free-surface flow problems in two and three dimensions, and how the predictions of the method might be verified with experiments.
Mr David Wilke :: 'A Computational Fluid Dynamic Study of Blood Flow Within the Coiled Umbilical Arteries' :: Abstract: The umbilical cord is the lifeline of the fetus throughout gestation. In a normal pregnancy it facilitates the supply of oxygen and nutrients from the placenta via a single vein, in addition to the return of deoxygenated blood from the developing embryo or fetus via two umbilical arteries. Despite the major role it plays in the growth of the fetus, pathologies of the umbilical cord are poorly understood. In particular, variations in the cord geometry, which typically forms a helical arrangement, have been correlated with adverse outcomes in pregnancy. Cords exhibiting either abnormally low or high levels of coiling have been associated with pathological results including growth restriction and fetal demise. Despite this, the methodology currently employed by clinicians to characterise umbilical pathologies can misdiagnose cords and is prone to error. In this talk a computational model of blood flow within rigid three-dimensional structures representative of the umbilical arteries will be presented. This study determined that the current characterization was unable to differentiate between cords which exhibited clinically distinguishable flow properties, including the cord pressure drop, which provides a measure of the loading on the fetal heart.


Modelling and optimisation of group dose-response challenge experiments 12:10 Mon 28 Oct, 2013 :: B.19 Ingkarni Wardli :: David Price :: University of Adelaide
Media...An important component of scientific research is the 'experiment'. Effective design of these experiments is important and, accordingly, has received significant attention under the heading 'optimal experimental design'. However, until recently, little work has been done on optimal experimental design for experiments where the underlying process can be modelled by a Markov chain. In this talk, I will discuss some of the work that has been done in the field of optimal experimental design for Markov chains, and some of the work that I have done in applying this theory to dose-response challenge experiments for the bacterium Campylobacter jejuni in chickens. 

A few flavours of optimal control of Markov chains 11:00 Thu 12 Dec, 2013 :: B18 :: Dr Sam Cohen :: Oxford University
Media...In this talk we will outline a general view of optimal control of a continuous-time Markov chain, and how this naturally leads to the theory of Backward Stochastic Differential Equations. We will see how this class of equations gives a natural setting to study these problems, and how we can calculate numerical solutions in many settings. These will include problems with payoffs with memory, with random terminal times, with ergodic and infinite-horizon value functions, and with finite and infinitely many states. Examples will be drawn from finance, networks and electronic engineering. 

Weak Stochastic Maximum Principle (SMP) and Applications 15:10 Thu 12 Dec, 2013 :: B.21 Ingkarni Wardli :: Dr Harry Zheng :: Imperial College, London
Media...In this talk we discuss a weak necessary and sufficient SMP for Markov modulated optimal control problems. Instead of insisting on the maximum condition of the Hamiltonian, we show that 0 belongs to the sum of Clarke's generalized gradient of the Hamiltonian and Clarke's normal cone of the control constraint set at the optimal control. Under a joint concavity condition on the Hamiltonian the necessary condition becomes sufficient. We give examples to demonstrate the weak SMP and its applications in quadratic loss minimization. 

Embed to homogenise heterogeneous wave equation. 12:35 Mon 17 Mar, 2014 :: B.19 Ingkarni Wardli :: Chen Chen :: University of Adelaide
Media...Consider materials with complicated microstructure: we want to model their large-scale dynamics by equations with effective, 'average' coefficients. I will show an example of a heterogeneous wave equation in 1D. If centre manifold theory is applied to model the original heterogeneous wave equation directly, we get a trivial model. I embed the wave equation into a family of more complex wave problems and I show the equivalence of the two sets of solutions. 

Viscoelastic fluids: mathematical challenges in determining their relaxation spectra 15:10 Mon 17 Mar, 2014 :: 5.58 Ingkarni Wardli :: Professor Russell Davies :: Cardiff University
Determining the relaxation spectrum of a viscoelastic fluid is a crucial step before a linear or nonlinear constitutive model can be applied. Information about the relaxation spectrum is obtained from simple flow experiments such as creep or oscillatory shear. However, the determination process involves the solution of one or more highly ill-posed inverse problems. The availability of only discrete data, the presence of noise in the data, as well as incomplete data, collectively make the problem very hard to solve.
In this talk I will illustrate the mathematical challenges inherent in determining relaxation spectra, and also introduce the method of wavelet regularization which enables the representation of a continuous relaxation spectrum by a set of hyperbolic scaling functions.


A model for the BitCoin block chain that takes propagation delays into account 15:10 Fri 28 Mar, 2014 :: B.21 Ingkarni Wardli :: Professor Peter Taylor :: The University of Melbourne
Media...Unlike cash transactions, most electronic transactions require the presence of a trusted authority to verify that the payer has sufficient funding to be able to make the transaction, and to adjust the account balances of the payer and payee. In recent years BitCoin has been proposed as an "electronic equivalent of cash". The general idea is that transactions are verified in a coded form in a block chain, which is maintained by the community of participants. Problems can arise when the block chain splits: that is, different participants have different versions of the block chain, something which can happen only when there are propagation delays, at least if all participants are behaving according to the protocol.
In this talk I shall present a preliminary model for the splitting behaviour of the block chain. I shall then go on to perform a similar analysis for a situation where a group of participants has adopted a recently proposed strategy for gaining a greater advantage from BitCoin processing than its combined computer power should be able to control. 

CARRYING CAPACITY FOR FINFISH AQUACULTURE IN SPENCER GULF: RAPID ASSESSMENT USING HYDRODYNAMIC AND NEAR-FIELD, SEMI-ANALYTIC SOLUTIONS 15:10 Fri 11 Apr, 2014 :: 5.58 Ingkarni Wardli :: Associate Professor John Middleton :: SARDI Aquatic Sciences and University of Adelaide
Aquaculture farming involves daily feeding of finfish and a subsequent excretion of nutrients into Spencer Gulf. Typically, finfish farming is done in six or so 50m diameter cages and over 600m X 600m lease sites. To help regulate the industry, it is desired that the finfish feed rates and the associated nutrient flux into the ocean are determined such that the maximum nutrient concentration c does not exceed a prescribed value (say cP) for ecosystem health. The prescribed value cP is determined by guidelines from the E.P.A. The concept is known as carrying capacity since limiting the feed rates limits the biomass of the farmed finfish.
Here, we model the concentrations that arise from a constant input flux (F) of nutrients in a source region (the cage or lease) using the (depth-averaged) two-dimensional advection-diffusion equation for constant and sinusoidal (tidal) currents. Application of the divergence theorem to this equation results in a new scale estimate of the maximum flux F (and thus feed rate) that is given by

F = cP / T*    (1)

where cP is the maximum allowed concentration and T* is a new time scale of "flushing" that involves both advection and diffusion. The scale estimate (1) is then shown to compare favourably with mathematically exact solutions of the advection-diffusion equation that are obtained using Green's functions and Fourier transforms. The maximum nutrient flux and associated feed rates are then estimated everywhere in Spencer Gulf through the development and validation of a hydrodynamic model. The model provides seasonal averages of the mean currents U and horizontal diffusivities KS that are needed to estimate T*. The diffusivities are estimated from a shear dispersal model of the tides, which are very large in the gulf. The estimates have been provided to PIRSA Fisheries and Aquaculture to assist in the sustainable expansion of finfish aquaculture.
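Once cP and T* are in hand, the scale estimate (1) is a single division. A hedged illustration with made-up values (not those of the study):

```python
# Scale estimate (1): the maximum sustainable nutrient flux is
# F = cP / T*, where cP is the prescribed maximum concentration and
# T* is a flushing time scale combining advection and diffusion.
# The numbers below are purely illustrative, not values from the study.
c_P = 0.5          # maximum allowed concentration (e.g. mg/L)
T_star = 2.0e5     # flushing time scale (s)

F_max = c_P / T_star   # maximum input flux consistent with c <= cP
```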


Outlier removal using the Bayesian information criterion for group-based trajectory modelling 12:10 Mon 28 Apr, 2014 :: B.19 Ingkarni Wardli :: Chris Davies :: University of Adelaide
Media...Attributes measured longitudinally can be used to define discrete paths of measurements, or trajectories, for each individual in a given population. Group-based trajectory modelling methods can be used to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Existing methods generally allocate every individual trajectory into one of the estimated groups. However this does not allow for the possibility that some individuals may be following trajectories so different from the rest of the population that they should not be included in a group-based trajectory model. This results in these outlying trajectories being treated as though they belong to one of the groups, distorting the estimated trajectory groups and any subsequent analyses that use them.
We have developed an algorithm for removing outlying trajectories based on the maximum change in Bayesian information criterion (BIC) due to removing a single trajectory. As well as deciding which trajectory to remove, the number of groups in the model can also change. The decision to remove an outlying trajectory is made by comparing the log-likelihood contributions of the observations to those of simulated samples from the estimated group-based trajectory model. In this talk the algorithm will be detailed and an application of its use will be demonstrated. 
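A minimal sketch of the BIC comparison driving such an algorithm (illustrative only, using a simple fitted normal model rather than a full group-based trajectory model):

```python
import numpy as np

def bic(log_lik, n_params, n_obs):
    """Bayesian information criterion: k*ln(n) - 2*ln(L); lower is better."""
    return n_params * np.log(n_obs) - 2.0 * log_lik

def gaussian_log_lik(x):
    """Maximised log-likelihood of data under a normal model
    with MLE mean and variance."""
    var = x.var()
    return -0.5 * len(x) * (np.log(2 * np.pi * var) + 1.0)

# A tight cluster of trajectory summaries plus one extreme outlier
# (made-up numbers for illustration).
x = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 10.0])
full = bic(gaussian_log_lik(x), n_params=2, n_obs=len(x))
reduced = bic(gaussian_log_lik(x[:-1]), n_params=2, n_obs=len(x) - 1)
# The algorithm in the talk removes the trajectory whose deletion gives
# the largest such BIC improvement (here, the outlier).
```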

A geometric model for odd differential K-theory 12:10 Fri 9 May, 2014 :: Ingkarni Wardli B20 :: Raymond Vozzo :: University of Adelaide
Odd K-theory has the interesting property that, unlike even K-theory, it admits an infinite number of inequivalent differential refinements. In this talk I will give a description of odd differential K-theory using infinite rank bundles and explain why it is the correct differential refinement. This is joint work with Michael Murray, Pedram Hekmati and Vincent Schlegel. 

Ergodicity and loss of capacity: a stochastic horseshoe? 15:10 Fri 9 May, 2014 :: B.21 Ingkarni Wardli :: Professor Ami Radunskaya :: Pomona College, the United States of America
Media...Random fluctuations of an environment are common in ecological and economic settings. The resulting processes can be described by a stochastic dynamical system, where a family of maps parametrized by an independent, identically distributed random variable forms the basis for a Markov chain on a continuous state space. Random dynamical systems are a beautiful combination of deterministic and random processes, and they have received considerable interest since von Neumann and Ulam's seminal work in the 1940s. Key questions in the study of a stochastic dynamical system are: does the system have a well-defined average, i.e. is it ergodic? How does this long-term behavior compare to that of the state variable in a constant environment with the averaged parameter? In this talk we answer these questions for a family of maps on the unit interval that model self-limiting growth. The techniques used can be extended to study other families of concave maps, and so we conjecture the existence of a "stochastic horseshoe". 

Ice floe collisions in the Marginal Ice Zone 12:10 Mon 12 May, 2014 :: B.19 Ingkarni Wardli :: Lucas Yiew :: University of Adelaide
In an era of climate change, it is becoming increasingly important to model the dynamics of sea-ice cover in the polar regions. The Marginal Ice Zone represents a vast region of ice cover strongly influenced by the effects of ocean waves. As ocean waves penetrate this region, wave energy is progressively dispersed through energy-dissipative mechanisms such as collisions between ice floes (discrete chunks of ice). In this talk I will discuss the mathematical models required to build a collision model, and the validation of these models with experimental results. 

Stochastic models of evolution: Trees and beyond 15:10 Fri 16 May, 2014 :: B.18 Ingkarni Wardli :: Dr Barbara Holland :: The University of Tasmania
In the first part of the talk I will give a general introduction to phylogenetics, and discuss some of the mathematical and statistical issues that arise in trying to infer evolutionary trees. In particular, I will discuss how we model the evolution of DNA along a phylogenetic tree using a continuous-time Markov process.
In the second part of the talk I will discuss how to express the two-state continuous-time Markov model on phylogenetic trees in such a way that allows its extension to more general models. In this framework we can model convergence of species as well as divergence (speciation). I will discuss the identifiability (or otherwise) of the models that arise in some simple cases. Use of a statistical framework means that we can use established techniques such as the AIC or likelihood ratio tests to decide if datasets show evidence of convergent evolution. 

Group meeting 15:10 Fri 6 Jun, 2014 :: 5.58 Ingkarni Wardli :: Meng Cao and Trent Mattner :: University of Adelaide
Meng Cao :: Multiscale modelling couples patches of nonlinear wave-like simulations ::
Abstract:
The multiscale gap-tooth scheme is built from given microscale simulations of complicated physical processes to empower macroscale simulations. By coupling small patches of simulations over unsimulated physical gaps, large savings in computational time are possible. So far the gap-tooth scheme has been developed for dissipative systems, but wave systems are also of great interest. This article develops the gap-tooth scheme for the case of nonlinear microscale simulations of wave-like systems. Classic macroscale interpolation provides a generic coupling between patches that achieves arbitrarily high-order consistency between the multiscale scheme and the underlying microscale dynamics. Eigenanalysis indicates that the resultant gap-tooth scheme empowers feasible computation of large-scale simulations of wave-like dynamics with complicated underlying physics. As a pilot study, we implement numerical simulations of dam-breaking waves by the gap-tooth scheme. Comparison between a gap-tooth simulation, a microscale simulation over the whole domain, and some published experimental data on dam breaking, demonstrates that the gap-tooth scheme feasibly computes large-scale wave-like dynamics with computational savings.
Trent Mattner :: Coupled atmosphere-fire simulations of the Canberra 2003 bushfires using WRF-Sfire :: Abstract:
The Canberra fires of January 18, 2003 are notorious for the extreme fire behaviour and fire-atmosphere-topography interactions that occurred, including lee-slope fire channelling, pyrocumulonimbus development and tornado formation. In this talk, I will discuss coupled fire-weather simulations of the Canberra fires using WRF-Sfire. In these simulations, a fire-behaviour model is used to dynamically predict the evolution of the fire front according to local atmospheric and topographic conditions, as well as the associated heat and moisture fluxes to the atmosphere. It is found that the predicted fire front and heat flux are not too bad, bearing in mind the complexity of the problem and the severe modelling assumptions made. However, the predicted moisture flux is too low, which has some impact on atmospheric dynamics. 

Modelling the mean-field behaviour of cellular automata 12:10 Mon 4 Aug, 2014 :: B.19 Ingkarni Wardli :: Kale Davies :: University of Adelaide
Cellular automata (CA) are lattice-based models in which agents fill the lattice sites and behave according to some specified rule. CA are particularly useful when modelling cell behaviour, and as such many people consider CA models in which agents undergo motility and proliferation type events. We are particularly interested in predicting the average behaviour of these models. In this talk I will show how a system of differential equations can be derived for the system and discuss the difficulties that arise in even the seemingly simple case of a CA with motility and proliferation. 

Hydrodynamics and rheology of self-propelled colloids 15:10 Fri 8 Aug, 2014 :: B17 Ingkarni Wardli :: Dr Sarthok Sircar :: University of Adelaide
The subcellular world has many components in common with soft condensed matter systems (polymers, colloids and liquid crystals). But it has novel properties, not present in traditional complex fluids, arising from a rich spectrum of non-equilibrium behaviour: flocking, chemotaxis and bioconvection.
The talk is divided into two parts. In the first half, we will (get an idea of how to) derive a hydrodynamic model for self-propelled particles of an arbitrary shape from first principles, in a sufficiently dilute suspension limit, moving in a three-dimensional space inside a viscous solvent. The model is then restricted to particles with ellipsoidal geometry to quantify the interplay of the long-range excluded volume and the short-range self-propulsion effects. The expressions for the constitutive stresses, relating the kinetic theory with the momentum transport equations, are derived using a combination of the virtual work principle (for extra elastic stresses) and symmetry arguments (for active stresses).
The second half of the talk will highlight my current numerical work. In particular, we will exploit a specific class of spectral basis functions together with RK4 time-stepping to determine the dynamical phases/structures as well as phase transitions of these ellipsoidal clusters. We will also discuss how to define the order (or orientation) of these clusters and understand the other rheological quantities.


Software and protocol verification using Alloy 12:10 Mon 25 Aug, 2014 :: B.19 Ingkarni Wardli :: Dinesha Ranathunga :: University of Adelaide
Reliable software isn't achieved by trial and error. It requires tools to support verification. Alloy is a tool based on set theory that allows expression of a logic-based model of software or a protocol, and hence allows checking of this model. In this talk, I will cover its key concepts, language syntax and analysis features. 

Neural Development of the Visual System: a laminar approach 15:10 Fri 29 Aug, 2014 :: N132 Engineering North :: Dr Andrew Oster :: Eastern Washington University
In this talk, we will introduce the architecture of the visual system in higher order primates and cats. Through activity-dependent plasticity mechanisms, the left and right eye streams segregate in the cortex in a stripe-like manner, resulting in a pattern called an ocular dominance map. We introduce a mathematical model to study how such a neural wiring pattern emerges. We go on to consider the joint development of the ocular dominance map with another feature of the visual system, the cytochrome oxidase blobs, which appear in the center of the ocular dominance stripes. Since cortex is in fact comprised of layers, we introduce a simple laminar model and perform a stability analysis of the wiring pattern. This intricate biological structure (ocular dominance stripes with "blobs" periodically distributed in their centers) can be understood as occurring due to two Turing instabilities combined with the leading-order dynamics of the system. 

Neural Development of the Visual System: a laminar approach 15:10 Fri 29 Aug, 2014 :: This talk will now be given as a School Colloquium :: Dr Andrew Oster :: Eastern Washington University

A Random Walk Through Discrete State Markov Chain Theory 12:10 Mon 22 Sep, 2014 :: B.19 Ingkarni Wardli :: James Walker :: University of Adelaide
This talk will go through the basics of Markov chain theory, including how to construct a continuous-time Markov chain (CTMC), how to adapt a Markov chain to include non-memoryless distributions, how to simulate CTMCs, and some key results. 
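A minimal sketch of the CTMC simulation the talk covers: at each state, draw an exponential holding time at the state's total exit rate, then jump with probabilities proportional to the off-diagonal entries of the generator matrix. The generator-matrix representation and function names are my own illustration, not material from the talk.

```python
import random

def simulate_ctmc(Q, state, t_max, rng):
    """Simulate a CTMC with generator matrix Q, starting in `state`,
    up to time t_max.  Returns the list of (jump time, state) pairs."""
    t, path = 0.0, [(0.0, state)]
    while True:
        rate = -Q[state][state]          # total exit rate of the current state
        if rate == 0.0:                  # absorbing state: stay forever
            break
        t += rng.expovariate(rate)       # exponential holding time
        if t >= t_max:
            break
        # choose the next state with probability proportional to Q[state][j]
        u, cum = rng.random() * rate, 0.0
        for j, q in enumerate(Q[state]):
            if j == state:
                continue
            cum += q
            if u < cum:
                state = j
                break
        path.append((t, state))
    return path
```

For a two-state chain with generator rows summing to zero, every jump switches the state, which gives a quick sanity check on the output.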

Inferring absolute population and recruitment of southern rock lobster using only catch and effort data 12:35 Mon 22 Sep, 2014 :: B.19 Ingkarni Wardli :: John Feenstra :: University of Adelaide
Abundance estimates from a data-limited version of catch survey analysis are compared to those from a novel one-parameter deterministic method. Bias of both methods is explored using simulation testing based on a more complex data-rich stock assessment population dynamics fishery operating model, exploring the impact of both varying levels of observation error in data as well as model process error. Recruitment was consistently better estimated than legal-size population, the latter most sensitive to increasing observation errors. A hybrid of the data-limited methods is proposed as the most robust approach. A more statistically conventional error-in-variables approach may also be touched upon, time permitting. 

Spectral asymptotics on random Sierpinski gaskets 12:10 Fri 26 Sep, 2014 :: Ingkarni Wardli B20 :: Uta Freiberg :: Universitaet Stuttgart
Self-similar fractals are often used in modelling porous media. Hence, defining a Laplacian and a Brownian motion on such sets describes transport through such materials. However, the assumption of strict self-similarity could be too restricting. So, we present several models of random fractals which could be used instead. After recalling the classical approaches of random homogeneous and recursive random fractals, we show how to interpolate between these two model classes with the help of so-called V-variable fractals. This concept (developed by Barnsley, Hutchinson & Stenflo) allows the definition of new families of random fractals, whereby the parameter V describes the degree of 'variability' of the realizations. We discuss how the degree of variability influences the geometric, analytic and stochastic properties of these sets. These results have been obtained with Ben Hambly (University of Oxford) and John Hutchinson (ANU Canberra). 

A Hybrid Markov Model for Disease Dynamics 12:35 Mon 29 Sep, 2014 :: B.19 Ingkarni Wardli :: Nicolas Rebuli :: University of Adelaide
Modelling the spread of infectious diseases is fundamental to protecting ourselves from potentially devastating epidemics. Among other factors, two key indicators for the severity of an epidemic are the size of the epidemic and the time until the last infectious individual is removed. To estimate the distribution of the size and duration of an epidemic (within a realistic population) an epidemiologist will typically use Monte Carlo simulations of an appropriate Markov process. However, the number of states in the simplest Markov epidemic model, the SIR model, is quadratic in the population size and so Monte Carlo simulations are computationally expensive. In this talk I will discuss two methods for approximating the SIR Markov process and I will demonstrate the approximation error by comparing probability distributions and estimates of the distributions of the final size and duration of an SIR epidemic. 
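The Monte Carlo baseline the abstract refers to can be sketched with the embedded jump chain of the SIR Markov process: when only the final size is needed, the event times are irrelevant and it suffices to choose each event (infection or removal) with probability proportional to its rate. This is a standard construction given for context, not the approximation methods of the talk; the parameter names are illustrative.

```python
import random

def sir_final_size(N, I0, beta, gamma, rng):
    """Total number of infections (beyond the I0 initial cases) in one
    realisation of the Markovian SIR model, via the embedded jump chain."""
    S, I = N - I0, I0
    while I > 0:
        infection_rate = beta * S * I / N
        recovery_rate = gamma * I
        if rng.random() < infection_rate / (infection_rate + recovery_rate):
            S, I = S - 1, I + 1          # infection event
        else:
            I -= 1                        # removal event
    return (N - I0) - S
```

Repeating this over many realisations gives an empirical final-size distribution, whose cost grows with the population size, which is the motivation for the approximations discussed in the talk.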

Geometric singular perturbation theory and canard theory to study travelling waves in: 1) a model for tumor invasion; and 2) a model for wound healing angiogenesis. 15:10 Fri 17 Oct, 2014 :: EM 218 Engineering & Mathematics Building :: Dr Petrus (Peter) van Heijster :: QUT
In this talk, I will present results on the existence of smooth and shock-like travelling wave solutions for two advection-reaction-diffusion models.
The first model describes malignant tumour (i.e. skin cancer) invasion, while the second one is a model for wound healing angiogenesis.
Numerical solutions indicate that both smooth and shock-fronted travelling wave solutions exist for these two models.
I will verify the existence of both types of solutions using techniques from geometric singular perturbation theory and canard theory.
Moreover, I will provide numerical results on the stability of the waves and the actual observed wave speeds.
This is joint work with K. Harley, G. Pettet, R. Marangell and M. Wechselberger. 

Modelling segregation distortion in multi-parent crosses 15:00 Mon 17 Nov, 2014 :: 5.57 Ingkarni Wardli :: Rohan Shah (joint work with B. Emma Huang and Colin R. Cavanagh) :: The University of Queensland
Construction of high-density genetic maps has been made feasible by low-cost high-throughput genotyping technology; however, the process is still complicated by biological, statistical and computational issues. A major challenge is the presence of segregation distortion, which can be caused by selection, difference in fitness, or suppression of recombination due to introgressed segments from other species. Alien introgressions are common in major crop species, where they have often been used to introduce beneficial genes from wild relatives.
Segregation distortion causes problems at many stages of the map construction process, including assignment to linkage groups and estimation of recombination fractions. This can result in incorrect ordering and estimation of map distances. While discarding markers will improve the resulting map, it may result in the loss of genomic regions under selection or containing beneficial genes (in the case of introgression).
To correct for segregation distortion we model it explicitly in the estimation of recombination fractions. Previously proposed methods introduce additional parameters to model the distortion, with a corresponding increase in computing requirements. This poses difficulties for large, densely genotyped experimental populations. We propose a method imposing minimal additional computational burden which is suitable for high-density map construction in large multi-parent crosses. We demonstrate its use by modelling the known Sr36 introgression in wheat for an eight-parent complex cross.


Topology Tomography with Spatial Dependencies 15:00 Tue 25 Nov, 2014 :: Engineering North N132 :: Darryl Veitch :: The University of Melbourne
There has been quite a lot of tomography inference work on measurement networks with a tree topology. Here observations are made, at the leaves of the tree, of `probes' sent down from the root and copied at each branch point. Inference can be performed based on loss or delay information carried by probes, and used in order to recover loss parameters, delay parameters, or the topology, of the tree. In all of these a strong assumption of spatial independence between links in the tree has been made in prior work. I will describe recent work on topology inference, based on loss measurement, which breaks that assumption. In particular I will introduce a new model class for loss with non-trivial spatial dependence, the `Jump Independent Models', which are well motivated, and prove that within this class the topology is identifiable. 

Multiscale modelling of multicellular biological systems: mechanics, development and disease 03:10 Fri 6 Mar, 2015 :: Lower Napier LG24 :: Dr James Osborne :: University of Melbourne
When investigating the development and function of multicellular biological systems it is not enough to only consider the behaviour of individual cells in isolation. For example, when studying tissue development, how individual cells interact, both mechanically and biochemically, influences the resulting tissue's form and function. In this talk we present a multiscale modelling framework for simulating the development and function of multicellular biological systems (in particular tissues). Utilising the natural structural unit of the cell, the framework consists of three main scales: the tissue level (macroscale); the cell level (mesoscale); and the subcellular level (microscale), with multiple interactions occurring between all scales. The cell level is central to the framework and cells are modelled as discrete interacting entities using one of a number of possible modelling paradigms, including lattice-based models (cellular automata and cellular Potts) and off-lattice models (cell-centre and vertex-based representations). The subcellular level concerns numerous metabolic and biochemical processes represented by interaction networks, rendered stochastically or as ODEs. The outputs from such systems influence the behaviour of the cell level, affecting properties such as adhesion, and also influence cell mitosis and apoptosis. At the tissue level we consider factors or restraints that influence the cells, for example the distribution of a nutrient or messenger molecule, which is represented by field equations on a growing domain, with individual cells functioning as sinks and/or sources. The modular approach taken within the framework enables more realistic behaviour to be considered at each scale.
This framework is implemented within the open-source Chaste library (Cancer, Heart and Soft Tissue Environment, http://www.cs.ox.ac.uk/chaste/) and has been used to model biochemical and biomechanical interactions in various biological systems. In this talk we present the key ideas of the framework along with applications within the fields of development and disease. 

A Model to Represent the Propagation of a Wave Over a Bovine Oocyte 12:10 Mon 20 Apr, 2015 :: Napier LG29 :: Amelia Thomas :: University of Adelaide
When the fertilization of egg cells is studied experimentally, generally the cumulus cells surrounding the egg are removed, for easier visualization of the egg itself. However, interesting phenomena are observed in the cumulus cells if they are left intact. In this talk I will present some models that can be used to describe the travelling wave-like movement of the cumulus cells away from the egg cell which occurs post-fertilisation. 

A Collision Algorithm for Sea Ice 12:10 Mon 4 May, 2015 :: Napier LG29 :: Lucas Yiew :: University of Adelaide
The wave-induced collisions between sea ice are highly complex and nonlinear, and involve a multitude of sub-processes. Several collision models do exist; however, to date, none of these models have been successfully integrated into sea-ice forecasting models.
A key component of a collision model is the development of an appropriate collision algorithm. In this seminar I will present a time-stepping, event-driven algorithm to detect, analyse and implement the pre- and post-collision processes. 
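As a hedged illustration of the event-detection step in an event-driven scheme of this kind (not the speaker's actual algorithm), the predicted contact time of two floes moving at constant velocity in one dimension reduces to a separation-over-closing-speed calculation:

```python
def collision_time(x1, v1, x2, v2):
    """Time until two 1-D floes at positions x1 < x2, moving with constant
    velocities v1 and v2, come into contact; None if they never collide."""
    closing_speed = v1 - v2
    if closing_speed <= 0:
        return None                      # floes separate or drift in parallel
    return (x2 - x1) / closing_speed
```

In an event-driven simulation one would compute such times for all candidate pairs, advance to the earliest event, apply the post-collision velocity update, and repeat.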

Haven't I seen you before? Accounting for partnership duration in infectious disease modeling 15:10 Fri 8 May, 2015 :: Level 7 Conference Room Ingkarni Wardli :: Dr Joel Miller :: Monash University
Our ability to accurately predict and explain the spread of an infectious disease is a significant factor in our ability to implement effective interventions. Our ability to accurately model disease spread depends on how accurately we capture the various effects. This is complicated by the fact that infectious disease spread involves a number of time scales. Four that are particularly relevant are: duration of infection in an individual, duration of partnerships between individuals, the time required for an epidemic to spread through the population, and the time required for the population structure to change (demographic or otherwise).
Mathematically simple models of disease spread usually make the implicit assumption that the duration of partnerships is by far the shortest time scale in the system. Thus they miss out on the tendency for infected individuals to deplete their local pool of susceptibles. Depending on the details of the disease in question, this effect may be significant.
I will discuss work done to reduce these assumptions for "SIR" (Susceptible-Infected-Recovered) diseases, which allows us to interpolate between populations which are static and populations which change partners rapidly in closed populations (no entry/exit). I will then discuss early results in applying these methods to diseases such as HIV in which the population time scales are relevant. 

Medical Decision Making 12:10 Mon 11 May, 2015 :: Napier LG29 :: Eka Baker :: University of Adelaide
Practicing physicians make treatment decisions based on clinical trial data every day. This data is based on trials primarily conducted on healthy volunteers, or on those with only the disease in question. In reality, patients do have existing conditions that can affect the benefits and risks associated with receiving these treatments.
In this talk, I will explain how we modified an already existing Markov model to show the progression of treatment of a single condition over time. I will then explain how we adapted this to a different condition, and then created a combined model, which demonstrated how both diseases and treatments progressed on the same patient over their lifetime. 

Dirac operators and Hamiltonian loop group action 12:10 Fri 24 Jul, 2015 :: Engineering and Maths EM212 :: Yanli Song :: University of Toronto
A definition of geometric quantization for compact Hamiltonian G-spaces was given by Bott, as the index of the Spin^c Dirac operator on the manifold. In this talk, I will explain how to generalize this idea to Hamiltonian LG-spaces. Instead of quantizing infinite-dimensional manifolds directly, we use their equivalent finite-dimensional models, the quasi-Hamiltonian G-spaces. By constructing a twisted spinor bundle and a twisted prequantum bundle on the quasi-Hamiltonian G-space, we define a Dirac operator whose index is given by positive energy representations of loop groups. A key role in the construction will be played by the algebraic cubic Dirac operator for the loop algebra. If time permits, I will also explain how to prove the quantization-commutes-with-reduction theorem for Hamiltonian LG-spaces in this framework. 

Dynamics on Networks: The role of local dynamics and global networks on hypersynchronous neural activity 15:10 Fri 31 Jul, 2015 :: Ingkarni Wardli B21 :: Prof John Terry :: University of Exeter, UK
Graph theory has evolved into a useful tool for studying complex brain networks inferred from a variety of measures of neural activity, including fMRI, DTI, MEG and EEG. In the study of neurological disorders, recent work has discovered differences in the structure of graphs inferred from patient and control cohorts. However, most of these studies pursue a purely observational approach; identifying correlations between properties of graphs and the cohort which they describe, without consideration of the underlying mechanisms. To move beyond this necessitates the development of mathematical modelling approaches to appropriately interpret network interactions and the alterations in brain dynamics they permit.
In the talk we introduce some of these concepts with application to epilepsy, introducing a dynamic network approach to study resting state EEG recordings from a cohort of 35 people with epilepsy and 40 adult controls. Using this framework we demonstrate a strongly significant difference between networks inferred from the background activity of people with epilepsy in comparison to normal controls. Our findings demonstrate that a mathematical model-based analysis of routine clinical EEG provides significant additional information beyond standard clinical interpretation, which may ultimately enable a more appropriate mechanistic stratification of people with epilepsy leading to improved diagnostics and therapeutics. 

In vitro models of colorectal cancer: why and how? 15:10 Fri 7 Aug, 2015 :: B19 Ingkarni Wardli :: Dr Tamsin Lannagan :: Gastrointestinal Cancer Biology Group, University of Adelaide / SAHMRI
1 in 20 Australians will develop colorectal cancer (CRC) and it is the second most common cause of cancer death. Similar to many other cancer types, it is the metastases rather than the primary tumour that are lethal, and prognosis is defined by "how far" the tumour has spread at time of diagnosis. Modelling in vivo behaviour through rapid and relatively inexpensive in vitro assays would help better target therapies as well as help develop new treatments. One such new in vitro tool is the culture of 3D organoids. Organoids are a biologically stable means of growing, storing and testing treatments against bowel cancer. To this end, we have just set up a human colorectal organoid bank across Australia. This consortium will help us to relate in vitro growth patterns to in vivo behaviour and ultimately in the selection of patients for personalized therapies. Organoid growth, however, is complex. There appear to be variable growth rates and growth patterns. Together with members of the ECMS we recently gained funding to better quantify and model spatial structures in these colorectal organoids. This partnership will aim to directly apply the expertise within the ECMS to patient care. 

Modelling terrorism risk - can we predict future trends? 12:10 Mon 10 Aug, 2015 :: Benham Labs G10 :: Stephen Crotty :: University of Adelaide
As we are all aware, the incidence of terrorism is increasing in the world today. This is confirmed when viewing terrorism events since 1970 as a time series. Can we model this increasing trend and use it to predict terrorism events in the future? Probably not, but we'll give it a go anyway. 

Predicting the Winning Time of a Stage of the Tour de France 12:10 Mon 21 Sep, 2015 :: Benham Labs G10 :: Nic Rebuli :: University of Adelaide
Sports can be lucrative, especially popular ones. But for all of us mere mortals, the only money we will ever glean from sporting events is through gambling (responsibly). When it comes to cycling, people generally choose their favourites based on individual and team performance throughout the world cycling calendar. But what can be said for the duration of a given stage or the winning time of the highly sought-after General Classification? In this talk I discuss a basic model for predicting the winning time of the Tour de France. I then apply this model to predicting the outcome of the 2012 and 2013 Tour de France and discuss the results in context. 

Modelling Directionality in Stationary Geophysical Time Series 12:10 Mon 12 Oct, 2015 :: Benham Labs G10 :: Mohd Mahayaudin Mansor :: University of Adelaide
Many time series show directionality inasmuch as plots against time and against time-to-go are qualitatively different, and there is a range of statistical tests to quantify this effect. There are two strategies for allowing for directionality in time series models. Linear models are reversible if and only if the noise terms are Gaussian, so one strategy is to use linear models with non-Gaussian noise. The alternative is to use nonlinear models. We investigate how non-Gaussian noise affects directionality in a first-order autoregressive process AR(1) and compare this with a threshold autoregressive model with two thresholds. The findings are used to suggest possible improvements to an AR(9) model, identified by an AIC criterion, for the average yearly sunspot numbers from 1700 to 1900. The improvement is defined in terms of one-step-ahead forecast errors from 1901 to 2014. 
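To illustrate the effect described above, one can simulate an AR(1) process with skewed innovations and compare the skewness of the first differences against a Gaussian-noise counterpart: a reversible (Gaussian) series gives a value near zero, while skewed noise does not. This is a generic sketch of one simple directionality statistic, not the tests used in the talk; the function names are my own.

```python
import random
import statistics

def ar1(n, phi, noise, rng):
    """Simulate x_t = phi * x_{t-1} + e_t with innovations drawn by `noise`."""
    x, xs = 0.0, []
    for _ in range(n):
        x = phi * x + noise(rng)
        xs.append(x)
    return xs

def diff_skewness(xs):
    """Sample skewness of the first differences of the series; values well
    away from zero suggest directionality (time irreversibility)."""
    d = [b - a for a, b in zip(xs, xs[1:])]
    m = statistics.fmean(d)
    s = statistics.pstdev(d)
    return sum((v - m) ** 3 for v in d) / (len(d) * s ** 3)
```

Running both cases side by side shows the contrast: exponential (mean-zero, skewed) innovations produce clearly positive difference skewness, Gaussian innovations do not.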

Typhoons and Tigers 12:10 Fri 23 Oct, 2015 :: Hughes Lecture Room 322 :: Assoc. Prof. Andrew Metcalfe :: School of Mathematical Sciences
The Sundarbans, situated on the north coast of India and south-west Bangladesh, are one of the world's largest mangrove regions (4,100 square kilometres). In India, there are over 4 million inhabitants on the deltaic islands in the region. There is a diverse flora and fauna, and it is the only remaining habitat of the Bengal tiger. The Sundarbans is a UNESCO World Heritage Site and International Biodiversity Reserve.
However, the Sundarbans are prone to flooding from the cyclones that regularly develop in the Bay of Bengal. In this talk I shall describe a stochastic model for the flood risk and explain how this can be used to make decisions about flood mitigation strategies and to provide estimates of the increase in flood risk due to rising sea levels and climate change.


Covariant model structures and simplicial localization 12:10 Fri 30 Oct, 2015 :: Ingkarni Wardli B17 :: Danny Stevenson :: The University of Adelaide
This talk will describe some aspects of the theory of quasi-categories, in particular the notion of left fibration and the allied covariant model structure. If B is a simplicial set, then I will describe some Quillen equivalences relating the covariant model structure on simplicial sets over B to a certain localization of simplicial presheaves on the simplex category of B. I will show how this leads to a new description of Lurie's simplicial rigidification functor as a hammock localization and describe some applications to Lurie's theory of straightening and unstraightening functors. 

Ocean dynamics of Gulf St Vincent: a numerical study 12:10 Mon 2 Nov, 2015 :: Benham Labs G10 :: Henry Ellis :: University of Adelaide
The aim of this research is to determine the physical dynamics of ocean circulation within Gulf St. Vincent, South Australia, and the exchange of momentum, nutrients, heat, salt and other water properties between the gulf and shelf via Investigator Strait and Backstairs Passage. The project aims to achieve this through the creation of high-resolution numerical models, combined with new and historical observations from a moored instrument package, satellite data, and shipboard surveys.
The quasi-realistic high-resolution models are forced using boundary conditions generated by existing larger-scale ROMS models, which in turn are forced at the boundary by a global model, creating a global-to-regional-to-local model network. Climatological forcing is done using European Centre for Medium-Range Weather Forecasts (ECMWF) data sets and is consistent over the regional and local models. A series of conceptual models are used to investigate the relative importance of separate physical processes in addition to fully forced quasi-realistic models.
An outline of the research to be undertaken is given:
- Connectivity of Gulf St. Vincent with shelf waters, including seasonal variation due to wind and thermocline patterns;
- The role of winter-time cooling and formation of eddies in flushing the gulf;
- The formation of a temperature front within the gulf during summer time; and
- The connectivity and importance of nutrient-rich, cool water upwelling from the Bonney Coast with the gulf via Backstairs Passage during summer time. 

Modelling Coverage in RNA Sequencing 09:00 Mon 9 Nov, 2015 :: Ingkarni Wardli 5.57 :: Arndt von Haeseler :: Max F Perutz Laboratories, University of Vienna
Media...RNA sequencing (RNA-seq) is the method of choice for measuring the expression of RNAs in a cell population. In an RNA-seq experiment, sequencing the full length of larger RNA molecules requires fragmentation into smaller pieces to be compatible with the limited read lengths of most deep-sequencing technologies. Unfortunately, the issue of non-uniform coverage across a genomic feature has been a concern in RNA-seq and is attributed to preferences for certain fragments in steps of library preparation and sequencing. However, the disparity between the observed non-uniformity of read coverage in RNA-seq data and the assumption of expected uniformity raises the question of what read coverage profile one should expect across a transcript if there are no biases in the sequencing protocol. We propose a simple model of unbiased fragmentation where we find that the expected coverage profile is not uniform and, in fact, depends on the ratio of fragment length to transcript length. To compare the non-uniformity predicted by our model with experimental data, we extended this simple model to incorporate empirical attributes matching those of the sequenced transcript in an RNA-seq experiment. In addition, we imposed an experimentally derived distribution on the frequency at which fragment lengths occur.
We used this model to compare our theoretical prediction with experimental data and with the uniform coverage model. If time permits, we will also discuss a potential application of our model. 
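The core observation, that uniform random placement of fragments does not give uniform coverage, is easy to check numerically. The sketch below is my own illustration (not the authors' full model, and the lengths are arbitrary): it compares the exact expected coverage of a fixed-length fragment placed uniformly along a transcript with a Monte Carlo simulation.

```python
import random

def expected_coverage(L, f):
    """Exact expected coverage at positions 0..L-1 of a transcript of
    length L, for one fragment of length f with start uniform on 0..L-f."""
    n_starts = L - f + 1
    profile = []
    for x in range(L):
        lo = max(0, x - f + 1)          # earliest start covering x
        hi = min(x, L - f)              # latest start covering x
        profile.append(max(0, hi - lo + 1) / n_starts)
    return profile

def simulate_coverage(L, f, n_frags, seed=0):
    """Monte Carlo version: drop n_frags fragments and average."""
    rng = random.Random(seed)
    cov = [0] * L
    for _ in range(n_frags):
        s = rng.randrange(L - f + 1)
        for x in range(s, s + f):
            cov[x] += 1
    return [c / n_frags for c in cov]

profile = expected_coverage(L=100, f=20)
# The profile ramps up over the first f-1 positions, plateaus in the
# middle, and ramps down at the far end: uniform placement does not
# give uniform coverage, and the plateau height depends on f/L.
print(round(profile[0], 4), round(profile[50], 4), round(profile[99], 4))
```

The ramps at the transcript ends are exactly the fragment-length-to-transcript-length effect described in the abstract.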

Use of epidemic models in optimal decision making 15:00 Thu 19 Nov, 2015 :: Ingkarni Wardli 5.57 :: Tim Kinyanjui :: School of Mathematics, The University of Manchester
Media...Epidemic models have proved useful in a number of applications in epidemiology. In this work, I will present two areas where we have used modelling to make informed decisions. Firstly, we have used an age-structured mathematical model to describe the transmission of Respiratory Syncytial Virus in a developed country setting and to explore different vaccination strategies. We found that delayed infant vaccination has significant potential in reducing the number of hospitalisations in the most vulnerable group and that most of the reduction is due to indirect protection. It also suggests that marked public health benefit could be achieved through an RSV vaccine delivered to age groups not seen as most at risk of severe disease. The second application is in the optimal design of studies aimed at collection of household-stratified infection data. A design decision involves making a trade-off between the number of households to enrol and the sampling frequency. Two commonly used study designs are considered: cross-sectional and cohort. The search for an optimal design uses Bayesian methods to explore the joint parameter-design space, combined with the Shannon entropy of the posteriors to estimate the amount of information for each design. We found that for the cross-sectional designs, the amount of information increases with the sampling intensity, while the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing data collection studies. 

Group meeting 15:10 Fri 20 Nov, 2015 :: Ingkarni Wardli B17 :: Mr Jack Keeler :: University of East Anglia / University of Adelaide
Title: Stability of free-surface flow over topography
Abstract: The forced KdV equation is used as a model to analyse the wave behaviour on the free surface in response to prescribed topographic forcing. The research involves computing steady solutions using numerical and asymptotic techniques and then analysing the stability of these steady solutions in time-dependent calculations. Stability is analysed by computing the eigenvalue spectra of the linearised fKdV operator and by exploiting the Hamiltonian structure of the fKdV. Future work includes analysing the solution space for a corrugated topography and investigating the three-dimensional problem using the KP equation.
+ Any items for group discussion 

A Semi-Markovian Modeling of Limit Order Markets 13:00 Fri 11 Dec, 2015 :: Ingkarni Wardli 5.57 :: Anatoliy Swishchuk :: University of Calgary
Media...R. Cont and A. de Larrard (SIAM J. Financial Mathematics, 2013) introduced a tractable stochastic model for the dynamics of a limit order book, computing various quantities of interest such as the probability of a price increase or the diffusion limit of the price process. As suggested by empirical observations, we extend their framework to 1) arbitrary distributions for book event inter-arrival times (possibly non-exponential) and 2) the case where both the nature of a new book event and its corresponding inter-arrival time depend on the nature of the previous book event. We do so by resorting to Markov renewal processes to model the dynamics of the bid and ask queues. We keep analytical tractability via explicit expressions for the Laplace transforms of various quantities of interest. Our approach is justified and illustrated by calibrating the model to the five stocks Amazon, Apple, Google, Intel and Microsoft on June 21st 2012. As in Cont and de Larrard, the bid-ask spread remains constant and equal to one tick, only the bid and ask queues are modelled (they are independent of each other and get re-initialized after a price change), and all orders have the same size. (This talk is based on our joint paper with Nelson Vadori (Morgan Stanley).) 
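As a toy illustration of the queue-depletion mechanism described above (my own sketch of the Markovian special case with made-up rates, not the authors' calibrated Markov renewal model), the following simulates bid and ask queues with exponential inter-arrival times; the mid-price moves one tick whenever a queue empties, after which both queues are re-initialised.

```python
import random

def simulate_mid_price(T, lam_add, lam_dep, q0, tick=0.01, seed=1):
    """Cont-de Larrard-style toy dynamics: on each side, limit orders
    arrive at rate lam_add and depletions (trades or cancellations) occur
    at rate lam_dep; the mid-price moves one tick down (up) when the bid
    (ask) queue empties, after which both queues restart at q0."""
    rng = random.Random(seed)
    t, price = 0.0, 100.0
    bid, ask = q0, q0
    total_rate = 2 * (lam_add + lam_dep)
    while t < T:
        t += rng.expovariate(total_rate)
        u = rng.random() * total_rate   # pick which of the 4 event types fires
        if u < lam_add:
            bid += 1
        elif u < 2 * lam_add:
            ask += 1
        elif u < 2 * lam_add + lam_dep:
            bid -= 1
        else:
            ask -= 1
        if bid == 0:
            price -= tick
            bid, ask = q0, q0
        elif ask == 0:
            price += tick
            bid, ask = q0, q0
    return price

final_price = simulate_mid_price(T=1000.0, lam_add=1.0, lam_dep=1.2, q0=5)
print(final_price)
```

The talk's generalisation replaces the exponential clocks with arbitrary inter-arrival distributions whose law depends on the previous event, i.e. a Markov renewal process.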

Connecting within-host and between-host dynamics to understand how pathogens evolve 15:10 Fri 1 Apr, 2016 :: Engineering South S112 :: A/Prof Mark Tanaka :: University of New South Wales
Media...Modern molecular technologies enable a detailed examination of the extent of genetic variation among isolates of bacteria and viruses. Mathematical models can help make inferences about pathogen evolution from such data. Because the evolution of pathogens ultimately occurs within hosts, it is influenced by dynamics within hosts, including interactions between pathogens and hosts. Most models of pathogen evolution focus on either the within-host or the between-host level. Here I describe steps towards bridging the two scales. First, I present a model of influenza virus evolution that incorporates within-host dynamics to obtain the between-host rate of molecular substitution as a function of the mutation rate, the within-host reproduction number and other factors. Second, I discuss a model of viral evolution in which some hosts are immunocompromised, thereby extending opportunities for within-host virus evolution, which then affects population-level evolution. Finally, I describe a model of Mycobacterium tuberculosis in which multidrug resistance evolves within hosts and spreads by transmission between hosts. 

Hot tube tau machine 15:10 Fri 15 Apr, 2016 :: B17 Ingkarni Wardli :: Dr Hayden Tronnolone :: University of Adelaide
Abstract: Microstructured optical fibres may be fabricated by first extruding molten material from a die to produce a macroscopic version of the final design, called a preform, and then stretching this to produce a fibre. In this talk I will demonstrate how to couple an existing model of the fluid flow during the extrusion stage to a basic model of the fluid temperature and present some preliminary conclusions. This work is still in progress and is being carried out in collaboration with Yvonne Stokes, Michael Chen and Jonathan Wylie.
(+ Any items for group discussion) 

Mathematical modelling of the immune response to influenza 15:00 Thu 12 May, 2016 :: Ingkarni Wardli B20 :: Ada Yan :: University of Melbourne
Media...The immune response plays an important role in the resolution of primary influenza infection and prevention of subsequent infection in an individual. However, the relative roles of each component of the immune response in clearing infection, and the effects of interaction between components, are not well quantified.
We have constructed a model of the immune response to influenza based on data from viral interference experiments, where ferrets were exposed to two influenza strains within a short time period. The changes in viral kinetics of the second virus due to the first virus depend on the strains used as well as the interval between exposures, enabling inference of the timing of innate and adaptive immune response components and the role of cross-reactivity in resolving infection. Our model provides a mechanistic explanation for the observed variation in viruses' abilities to protect against subsequent infection at short inter-exposure intervals, either by delaying the second infection or inducing stochastic extinction of the second virus. It also explains the decrease in recovery time for the second infection when the two strains elicit cross-reactive cellular adaptive immune responses. To account for inter-subject as well as inter-virus variation, the model is formulated using a hierarchical framework. We will fit the model to experimental data using Markov chain Monte Carlo methods; quantification of the model will enable a deeper understanding of the effects of potential new treatments.


Harmonic analysis of Hodge-Dirac operators 12:10 Fri 13 May, 2016 :: Eng & Maths EM205 :: Pierre Portal :: Australian National University
Media...When the metric on a Riemannian manifold is perturbed in a rough (merely bounded and measurable) manner, do basic estimates involving the Hodge-Dirac operator $D = d+d^*$ remain valid? Even in the model case of a perturbation of the Euclidean metric on $\mathbb{R}^n$, this is a difficult question. For instance, the fact that the $L^2$ estimate $\|Du\|_2 \sim \|\sqrt{D^2}u\|_2$ remains valid for perturbed versions of $D$ was a famous conjecture made by Kato in 1961 and solved, positively, in a groundbreaking paper of Auscher, Hofmann, Lacey, McIntosh and Tchamitchian in 2002. In the past fifteen years, a theory has emerged from the solution of this conjecture, making rough perturbation problems much more tractable. In this talk, I will give a general introduction to this theory, and present one of its latest results: a flexible approach to $L^p$ estimates for the holomorphic functional calculus of $D$. This is joint work with D. Frey (Delft) and A. McIntosh (ANU).


Algebraic structures associated to Brownian motion on Lie groups 13:10 Thu 16 Jun, 2016 :: Ingkarni Wardli B17 :: Steve Rosenberg :: University of Adelaide / Boston University
Media...In (1+1)d TQFT, products and coproducts are associated to pairs of pants decompositions of Riemann surfaces. We consider a toy model in dimension (0+1) consisting of specific broken paths in a Lie group. The products and coproducts are constructed by a Brownian motion average of holonomy along these paths with respect to a connection on an auxiliary bundle. In the trivial case over the torus, we (seem to) recover the Hopf algebra structure on the symmetric algebra. In the general case, we (seem to) get deformations of this Hopf algebra. This is a preliminary report on joint work with Michael Murray and Raymond Vozzo. 

Probabilistic Meshless Methods for Bayesian Inverse Problems 15:10 Fri 5 Aug, 2016 :: Engineering South S112 :: Dr Chris Oates :: University of Technology Sydney
Media...This talk deals with statistical inverse problems that involve partial differential equations (PDEs) with unknown parameters. Our goal is to account, in a rigorous way, for the impact of discretisation error that is introduced at each evaluation of the likelihood due to numerical solution of the PDE. In the context of meshless methods, the proposed, model-based approach to discretisation error encourages statistical inferences to be more conservative in the presence of significant solver error. In addition, (i) a principled learning-theoretic approach to minimise the impact of solver error is developed, and (ii) the challenge of nonlinear PDEs is considered. The method is applied to parameter inference problems in which non-negligible solver error must be accounted for in order to draw valid statistical conclusions. 

Mathematical modelling of social spreading processes 15:10 Fri 19 Aug, 2016 :: Napier G03 :: Prof Hans De Sterck :: Monash University
Media...Social spreading processes are intriguing manifestations of how humans interact and shape each other's lives. There is great interest in improving our understanding of these processes, and the increasing availability of empirical information in the era of big data and online social networks, combined with mathematical and computational modelling techniques, offers compelling new ways to study these processes.
I will first discuss mathematical models for the spread of political revolutions on social networks. The influence of online social networks and social media on the dynamics of the Arab Spring revolutions of 2011 is of particular interest in our work. I will describe a hierarchy of models, starting from agent-based models realized on empirical social networks, and ending with population-level models that summarize the dynamical behaviour of the spreading process. We seek to understand quantitatively how political revolutions may be facilitated by the modern online social networks of social media.
The second part of the talk will describe a population-level model for the social dynamics that cause cigarette smoking to spread in a population. Our model predicts that more individualistic societies will show faster adoption and cessation of smoking. Evidence from a newly composed century-long composite data set on smoking prevalence in 25 countries supports the model, with potential implications for public health interventions around the world.
Throughout the talk, I will argue that important aspects of social spreading processes can be revealed and understood via quantitative mathematical and computational models matched to empirical data.
This talk describes joint work with John Lang and Danny Abrams. 

Predicting turbulence 14:10 Tue 30 Aug, 2016 :: Napier 209 :: Dr Trent Mattner :: School of Mathematical Sciences
Media...Turbulence is characterised by three-dimensional unsteady fluid motion over a wide range of spatial and temporal scales. It is important in many problems of technological and scientific interest, such as drag reduction, energy production and climate prediction.
Turbulent flows are governed by the Navier-Stokes equations, which are a nonlinear system of partial differential equations. Typically, numerical methods are needed to find solutions to these equations. In turbulent flows, however, the resulting computational problem is usually intractable. Filtering or averaging the Navier-Stokes equations mitigates the computational problem, but introduces new quantities into the equations. Mathematical models of turbulence are needed to estimate these quantities. One promising turbulence model consists of a random collection of fluid vortices, which are themselves approximate solutions of the Navier-Stokes equations. 

Modelling evolution of postmenopausal human longevity: The Grandmother Hypothesis 15:10 Fri 2 Sep, 2016 :: Napier G03 :: Dr Peter Kim :: University of Sydney
Media...Human postmenopausal longevity makes us unique among primates, but how did it evolve? One explanation, the Grandmother Hypothesis, proposes that as grasslands spread in ancient Africa displacing foods ancestral youngsters could effectively exploit, older females whose fertility was declining left more descendants by subsidizing grandchildren and allowing mothers to have new babies sooner. As more robust elders could help more descendants, selection favoured increased longevity while maintaining the ancestral end of female fertility.
We develop a probabilistic agent-based model that incorporates two sexes and mating, fertility-longevity trade-offs, and the possibility of grandmother help. Using this model, we show how the grandmother effect could have driven the evolution of human longevity. Simulations reveal two stable life-histories, one human-like and the other like that of our nearest cousins, the great apes. The probabilistic formulation shows how stochastic effects can slow down and prevent escape from the ancestral condition, and it allows us to investigate the effect of mutation rates on the trajectory of evolution. 

A principled experimental design approach to big data analysis 15:10 Fri 23 Sep, 2016 :: Napier G03 :: Prof Kerrie Mengersen :: Queensland University of Technology
Media...Big Datasets are endemic, but they are often notoriously difficult to analyse because of their size, complexity, history and quality. The purpose of this paper is to open a discourse on the use of modern experimental design methods to analyse Big Data in order to answer particular questions of interest. By appeal to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has wide generality and advantageous inferential and computational properties. In particular, the principled experimental design approach is shown to provide a flexible framework for analysis that, for certain classes of objectives and utility functions, delivers equivalent answers compared with analyses of the full dataset. It can also provide a formalised method for iterative parameter estimation, model checking, identification of data gaps and evaluation of data quality. Finally, it has the potential to add value to other Big Data sampling algorithms, in particular divide-and-conquer strategies, by determining efficient subsamples. 

SIR epidemics with stages of infection 12:10 Wed 28 Sep, 2016 :: EM218 :: Matthieu Simon :: Universite Libre de Bruxelles
Media...This talk is concerned with a stochastic model for the spread of an epidemic in a closed homogeneously mixing population. The population is subdivided into three classes of individuals: the susceptibles, the infectives and the removed cases. In short, an infective remains infectious during a random period of time. While infected, it can contact all the susceptibles present, independently of the other infectives. At the end of the infectious period, it becomes a removed case and has no further part in the infection process.
We represent an infectious period as a set of different stages that an infective can go through before being removed. The transitions between stages are ruled by either a Markov process or a semi-Markov process. In each stage, an infective makes contaminations at the epochs of a Poisson process with a specific rate.
Our purpose is to derive closed expressions for a transform of different statistics related to the end of the epidemic, such as the final number of susceptibles and the area under the trajectories of all the infectives. The analysis is performed by using simple matrix analytic methods and martingale arguments. Numerical illustrations will be provided at the end of the talk. 
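A minimal Gillespie-style simulation of the staged-infectious-period idea (my own sketch with made-up rates, not the matrix-analytic and martingale machinery of the talk) looks like this:

```python
import random

def sir_staged(N, I0, beta, mu, seed=0):
    """Stochastic SIR epidemic in a closed population of size N in which
    each infective passes through len(mu) Markovian stages: in stage k it
    makes contacts at Poisson rate beta[k] (each contact hits a uniformly
    chosen other individual) and leaves the stage at rate mu[k], being
    removed after the last stage.  Returns the final number of susceptibles."""
    rng = random.Random(seed)
    S = N - I0
    stages = [0] * I0                        # current stage of each infective
    while stages:
        rates = [beta[k] + mu[k] for k in stages]
        r = rng.random() * sum(rates)
        i = 0
        while i < len(rates) - 1 and r > rates[i]:   # pick the acting infective
            r -= rates[i]
            i += 1
        k = stages[i]
        if r < beta[k]:                      # contact event
            if S > 0 and rng.random() < S / (N - 1):
                S -= 1
                stages.append(0)             # new infective enters stage 0
        elif k + 1 < len(mu):                # transition to the next stage
            stages[i] = k + 1
        else:                                # removal after the final stage
            stages.pop(i)
    return S

final_S = sir_staged(N=200, I0=1, beta=[2.0, 0.5], mu=[1.0, 1.0])
print(final_S)
```

Running this many times gives the empirical distribution of the final number of susceptibles, one of the end-of-epidemic statistics whose transforms the talk derives in closed form.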

Symmetric functions and quantum integrability 15:10 Fri 30 Sep, 2016 :: Napier G03 :: Dr Paul Zinn-Justin :: University of Melbourne/Universite Pierre et Marie Curie
Media...We'll discuss an approach to studying families of symmetric polynomials which is based on "quantum integrability", that is, on the use of exactly solvable two-dimensional lattice models. We'll first explain the general strategy on the simplest case, namely Schur polynomials, with the introduction of a model of lattice paths (a.k.a. the five-vertex model). We'll then discuss recent work (in collaboration with M. Wheeler) that extends this approach to Hall-Littlewood polynomials and Grothendieck polynomials, and some applications of it. 
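For readers who want to experiment with the objects in question, Schur polynomials can be evaluated directly from the classical Jacobi-Trudi identity $s_\lambda = \det(h_{\lambda_i - i + j})$ (a standard fact, not specific to the lattice-model approach of the talk):

```python
from fractions import Fraction

def h(k, xs):
    """Complete homogeneous symmetric polynomial h_k evaluated at xs, via
    the recursion h_k(x_1..x_n) = h_k(x_1..x_{n-1}) + x_n * h_{k-1}(x_1..x_n)."""
    if k < 0:
        return Fraction(0)
    H = [Fraction(1)] + [Fraction(0)] * k
    for x in xs:
        for j in range(1, k + 1):
            H[j] += Fraction(x) * H[j - 1]
    return H[k]

def det(M):
    """Determinant by cofactor expansion (fine for the small matrices here)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def schur(lam, xs):
    """Jacobi-Trudi: s_lambda = det( h_{lambda_i - i + j} ), 1 <= i, j <= len(lam)."""
    n = len(lam)
    return det([[h(lam[i] - (i + 1) + (j + 1), xs) for j in range(n)]
                for i in range(n)])

print(schur((1, 1), (1, 2, 3)))  # e_2(1,2,3) = 1*2 + 1*3 + 2*3 = 11
print(schur((2, 1), (1, 1, 1)))  # counts SSYT of shape (2,1), entries <= 3: 8
```

Setting all variables to 1, as in the second example, counts the semistandard Young tableaux of the given shape, which is the sum over the lattice-path configurations mentioned in the abstract.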

Poisson-Lie T-duality and integrability 11:10 Thu 13 Apr, 2017 :: Engineering & Math EM213 :: Ctirad Klimcik :: Aix-Marseille University, Marseille
Media...The Poisson-Lie T-duality relates sigma-models with target spaces symmetric with respect to mutually dual Poisson-Lie groups. In the special case where the Poisson-Lie symmetry reduces to the standard non-Abelian symmetry, one of the corresponding mutually dual sigma-models is the standard principal chiral model, which is known to enjoy the property of integrability. The natural question of whether this non-Abelian integrability can be lifted to integrability of sigma-models dualizable with respect to a general Poisson-Lie symmetry was answered in the affirmative by myself in 2008. The corresponding Poisson-Lie symmetric and integrable model is a one-parameter deformation of the principal chiral model and features a remarkable explicit appearance of the standard Yang-Baxter operator in the target space geometry. Several distinct integrable deformations of the Yang-Baxter sigma-model have since been uncovered, which turn out to be related by Poisson-Lie T-duality to the so-called lambda-deformed sigma-models. My talk gives a review of these developments, some of which have found applications in string theory in the framework of the AdS/CFT correspondence. 

Hodge theory on the moduli space of Riemann surfaces 12:10 Fri 5 May, 2017 :: Napier 209 :: Jesse Gell-Redman :: University of Melbourne
Media...The Hodge theorem on a closed Riemannian manifold identifies the de Rham cohomology with the space of harmonic differential forms. Although there are various extensions of the Hodge theorem to singular or complete but non-compact spaces, when there is an identification of L^2 harmonic forms with a topological feature of the underlying space, it is highly dependent on the nature of infinity (in the non-compact case) or the locus of incompleteness; no unifying theorem treats all cases. We will discuss work toward extending the Hodge theorem to singular Riemannian manifolds where the singular locus is an incomplete cusp edge. These can be pictured locally as a bundle of horns, and they provide a model for the behavior of the Weil-Petersson metric on the compactified Riemann moduli space near the interior of a divisor. Joint with J. Swoboda and R. Melrose. 

Graded K-theory and C*-algebras 11:10 Fri 12 May, 2017 :: Engineering North 218 :: Aidan Sims :: University of Wollongong
Media...C*-algebras can be regarded, in a very natural way, as noncommutative algebras of continuous functions on topological spaces. The analogy is strong enough that topological K-theory in terms of formal differences of vector bundles has a direct analogue for C*-algebras. There is by now a substantial array of tools out there for computing C*-algebraic K-theory. However, when we want to model physical phenomena, like topological phases of matter, we need to take into account various physical symmetries, some of which are encoded by gradings of C*-algebras by the two-element group. Even the definition of graded C*-algebraic K-theory is not entirely settled, and there are relatively few computational tools out there. I will try to outline what a C*-algebra (and a graded C*-algebra) is, indicate what graded K-theory ought to look like, and discuss recent work with Alex Kumjian and David Pask linking this with the deep and powerful work of Kasparov, and using this to develop computational tools. 

Serotonin Movement Through the Human Colonic Mucosa 15:10 Fri 19 May, 2017 :: Ingkarni Wardli 5.57 :: Helen Dockrell :: Flinders University / Flinders Medical Centre
The control of gut motility remains poorly defined and this makes it difficult to treat disorders associated with dysmotility in patient populations. Intestinal serotonin can elicit and modulate colonic motor patterns and is released in response to a variety of stimuli, including nutrient ingestion and pressure change. I will describe a computational model of intestinal tissue and the predicted movement of serotonin through this tissue by advection and diffusion following pressure-dependent release. I have developed this model as a PhD candidate under the supervision of Associate Professor Phil Dinning, Professor Damien Keating and Dr Lukasz Wilendt. 

Probabilistic approaches to human cognition: What can the math tell us? 15:10 Fri 26 May, 2017 :: Engineering South S111 :: Dr Amy Perfors :: School of Psychology, University of Adelaide
Why do people avoid vaccinating their children? Why, in groups, does it seem like the most extreme positions are weighted more highly? On the surface, both of these examples look like instances of non-optimal or irrational human behaviour. This talk presents preliminary evidence suggesting, however, that in both cases this pattern of behaviour is sensible given certain assumptions about the structure of the world and the nature of beliefs. In the case of vaccination, we model people's choices using expected utility theory. This reveals that their ignorance about the nature of diseases like whooping cough makes them underweight the negative utility attached to contracting such a disease. When that ignorance is addressed, their values and utilities shift. In the case of extreme positions, we use simulations of chains of Bayesian learners to demonstrate that whenever information is propagated in groups, the views of the most extreme learners naturally gain more traction. This effect emerges as the result of basic mathematical assumptions rather than human irrationality. 
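The expected-utility point can be made with a two-line calculation. The sketch below is purely illustrative: the probabilities and (dis)utilities are made-up numbers, not estimates from the talk, chosen only to show how correcting a severity estimate flips the ranking of options without changing any probability.

```python
def expected_utilities(p_disease, u_disease, p_side_effect, u_side_effect):
    """Expected utility of vaccinating vs declining, in a toy model where
    vaccinating risks a side effect and declining risks the disease."""
    eu_vaccinate = p_side_effect * u_side_effect
    eu_decline = p_disease * u_disease
    return eu_vaccinate, eu_decline

# A parent who underweights the severity of whooping cough (u = -10)
# ranks declining above vaccinating; a more accurate severity (u = -1000)
# flips the ranking, with identical probabilities.
eu_v1, eu_d1 = expected_utilities(0.01, -10.0, 0.05, -5.0)
eu_v2, eu_d2 = expected_utilities(0.01, -1000.0, 0.05, -5.0)
print(eu_v1, eu_d1)   # decline preferred
print(eu_v2, eu_d2)   # vaccinate preferred
```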

Stokes' Phenomenon in Translating Bubbles 15:10 Fri 2 Jun, 2017 :: Ingkarni Wardli 5.57 :: Dr Chris Lustri :: Macquarie University
This study of translating air bubbles in a Hele-Shaw cell containing viscous fluid reveals the critical role played by surface tension in these systems. The standard zero-surface-tension model of Hele-Shaw flow predicts that a continuum of bubble solutions exists for arbitrary flow translation velocity. The inclusion of small surface tension, however, eliminates this continuum of solutions, instead producing a discrete, countably infinite family of solutions, each with distinct translation speeds. We are interested in determining this discrete family of solutions, and understanding why only these solutions are permitted.
Studying this problem in the asymptotic limit of small surface tension does not seem to give any particular reason why only these solutions should be selected. It is only by using exponential asymptotic methods to study the Stokes' structure hidden in the problem that we are able to obtain a complete picture of the bubble behaviour, and hence understand the selection mechanism that only permits certain solutions to exist.
In the first half of my talk, I will explain the powerful ideas that underpin exponential asymptotic techniques, such as analytic continuation and optimal truncation. I will show how they are able to capture behaviour known as Stokes' Phenomenon, which is typically invisible to classical asymptotic series methods. In the second half of the talk, I will introduce the problem of a translating air bubble in a Hele-Shaw cell, and show that the behaviour can be fully understood by examining the Stokes' structure concealed within the problem. Finally, I will briefly showcase other important physical applications of exponential asymptotic methods, including submarine waves and particle chains. 

An action of the Grothendieck-Teichmuller group on stable curves of genus zero 11:10 Fri 22 Sep, 2017 :: Engineering South S111 :: Marcy Robertson :: University of Melbourne
Media...In this talk, we show that the group of homotopy automorphisms of the profinite completion of the framed little 2-discs operad is isomorphic to the (profinite) Grothendieck-Teichmuller group. We deduce that the Grothendieck-Teichmuller group acts nontrivially on an operadic model of the genus zero Teichmuller tower. This talk will be aimed at a general audience and will not assume previous knowledge of the Grothendieck-Teichmuller group or operads. This is joint work with Pedro Boavida and Geoffroy Horel. 

How oligomerisation impacts steady state gradient in a morphogenreceptor system 15:10 Fri 20 Oct, 2017 :: Ingkarni Wardli 5.57 :: Mr Phillip Brown :: University of Adelaide
In developmental biology an important process is cell fate determination, where cells start to differentiate their form and function. This is an element of the broader concept of morphogenesis. It has long been held that cell differentiation can occur by a chemical signal providing positional information to 'undecided' cells. This chemical produces a gradient of concentration that indicates to a cell what path it should develop along. More recently it has been shown that in a particular system of this type, the chemical (protein) does not exist purely as individual molecules, but can exist in multiprotein complexes known as oligomers.
Mathematical modelling has been performed on systems of oligomers to determine if this concept can produce useful gradients of concentration. However, there is a wide range of possibilities when it comes to how oligomer systems can be modelled, and most of them have not been explored.
In this talk I will introduce a new monomer system and analyse it, before extending this model to include oligomers. A number of oligomer models are proposed based on the assumption that proteins are only produced in their oligomer form and can only break apart once they have left the producing cell. It will be shown that when oligomers are present under these conditions, but only monomers are permitted to bind with receptors, then the system can produce robust, biologically useful gradients for a significantly larger range of model parameters (for instance, degradation, production and binding rates) compared to the monomer system. We will also show that when oligomers are permitted to bind with receptors there is negligible difference compared to the monomer system. 
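For intuition about how such gradients arise at all, a minimal monomer-only sketch (my own illustration; it reproduces none of the talk's production, binding or oligomer details) solves the steady diffusion-degradation balance D C'' - k C = 0 with a production flux at the source boundary; the profile decays exponentially with length scale sqrt(D/k), which is how the degradation and production rates set the gradient's reach.

```python
def steady_gradient(D, k, flux, L, n):
    """Finite-difference steady state of  D*C'' - k*C = 0  on (0, L),
    with production flux -D*C'(0) = flux at the source and C(L) = 0.
    For L >> sqrt(D/k) the solution is close to the analytic profile
    C(x) = (flux / sqrt(D*k)) * exp(-x * sqrt(k/D))."""
    h = L / n
    a = [0.0] * (n + 1)          # sub-diagonal
    b = [0.0] * (n + 1)          # diagonal
    c = [0.0] * (n + 1)          # super-diagonal
    d = [0.0] * (n + 1)          # right-hand side
    # flux boundary at x = 0, via ghost-node elimination
    b[0] = -2 * D / h ** 2 - k
    c[0] = 2 * D / h ** 2
    d[0] = -2 * flux / h
    for i in range(1, n):
        a[i] = D / h ** 2
        b[i] = -2 * D / h ** 2 - k
        c[i] = D / h ** 2
    b[n] = 1.0                   # absorbing far boundary: C(L) = 0
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, n + 1):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    C = [0.0] * (n + 1)
    C[n] = d[n] / b[n]
    for i in range(n - 1, -1, -1):
        C[i] = (d[i] - c[i] * C[i + 1]) / b[i]
    return C

C = steady_gradient(D=1.0, k=1.0, flux=1.0, L=20.0, n=2000)
print(round(C[0], 3), round(C[100], 3))  # near 1.0 and exp(-1) ~ 0.368
```

The oligomer models of the talk add extra species and reactions on top of this basic transport picture.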

The Markovian binary tree applied to demography and conservation biology 15:10 Fri 27 Oct, 2017 :: Ingkarni Wardli B17 :: Dr Sophie Hautphenne :: University of Melbourne
Markovian binary trees form a general and tractable class of continuous-time branching processes, which makes them well-suited for real-world applications. Thanks to their appealing probabilistic and computational features, these processes have proven to be an excellent modelling tool for applications in population biology. Typical performance measures of these models include the extinction probability of a population, the distribution of the population size at a given time, the total progeny size until extinction, and the asymptotic population composition. Besides giving an overview of the main performance measures and the techniques involved to compute them, we discuss recently developed statistical methods to estimate the model parameters, depending on the accuracy of the available data. We illustrate our results in human demography and in conservation biology. 
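The extinction probability mentioned above is obtained, for Markovian binary trees, as the minimal solution of a matrix fixed-point equation; the scalar analogue for a plain Galton-Watson process shows the idea (an illustrative sketch, not the MBT algorithms of the talk):

```python
def extinction_probability(pgf, tol=1e-12, max_iter=10000):
    """Smallest non-negative fixed point of an offspring p.g.f. G, found
    by iterating q <- G(q) from q = 0.  This monotone iteration is the
    scalar analogue of the matrix fixed-point schemes used for MBTs."""
    q = 0.0
    for _ in range(max_iter):
        q_next = pgf(q)
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    return q

# Individuals leave 0 offspring w.p. 1/4 and 2 offspring w.p. 3/4:
# G(s) = 1/4 + (3/4) s^2, whose smallest fixed point is 1/3.
q = extinction_probability(lambda s: 0.25 + 0.75 * s * s)
print(round(q, 6))  # 0.333333
```

Starting from q = 0 guarantees convergence to the *minimal* fixed point, which is the true extinction probability even when s = 1 is also a solution.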

Stochastic Modelling of Urban Structure 11:10 Mon 20 Nov, 2017 :: Engineering Nth N132 :: Mark Girolami :: Imperial College London, and The Alan Turing Institute
Urban systems are complex in nature and comprise a large number of individuals that act according to utility, a measure of net benefit pertaining to preferences. The actions of individuals give rise to an emergent behaviour, creating the so-called urban structure that we observe. In this talk, I develop a stochastic model of urban structure to formally account for uncertainty arising from the complex behaviour. We further use this stochastic model to infer the components of a utility function from observed urban structure. This is a more powerful modelling framework in comparison to the ubiquitous discrete choice models that are of limited use for complex systems, in which the overall preferences of individuals are difficult to ascertain. We model urban structure as a realization of a Boltzmann distribution that is the invariant distribution of a related stochastic differential equation (SDE) describing the dynamics of the urban system. Our specification of the Boltzmann distribution assigns higher probability to stable configurations, in the sense that consumer surplus (demand) is balanced with running costs (supply), as characterized by a potential function. We specify a Bayesian hierarchical model to infer the components of a utility function from observed structure. Our model is doubly-intractable and poses significant computational challenges that we overcome using recent advances in Markov chain Monte Carlo (MCMC) methods. We demonstrate our methodology with case studies on the London retail system and airports in England. 
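The link between a Boltzmann distribution and an SDE can be seen in one dimension: the overdamped Langevin equation dX = -V'(X) dt + sqrt(2) dW has invariant density proportional to exp(-V(x)). A minimal Euler-Maruyama sketch (toy quadratic potential, nothing to do with the urban model itself):

```python
import numpy as np

# Euler-Maruyama simulation of the Langevin SDE
#   dX = -grad V(X) dt + sqrt(2) dW,
# whose long-run samples approximate the Boltzmann measure
# proportional to exp(-V(x)). Toy potential for illustration.
rng = np.random.default_rng(0)

def langevin_samples(grad_V, x0=0.0, dt=1e-2, n=200_000):
    x = x0
    out = np.empty(n)
    for i in range(n):
        x += -grad_V(x) * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
        out[i] = x
    return out

# V(x) = x^2 / 2, so the Boltzmann measure is the standard normal.
xs = langevin_samples(lambda x: x)
print(xs.mean(), xs.var())
```

With V(x) = x^2/2 the sample mean and variance should settle near 0 and 1, the moments of the standard normal invariant law.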

A multiscale approximation of a Cahn-Larché system with phase separation on the microscale 15:10 Thu 22 Feb, 2018 :: Ingkarni Wardli 5.57 :: Ms Lisa Reischmann :: University of Augsburg
We consider the process of phase separation of a binary system under the influence of mechanical deformation, and we derive a mathematical multiscale model which describes the evolving microstructure, taking into account the elastic properties of the involved materials.
Motivated by phase-separation processes observed in lipid monolayers in film-balance experiments, the starting point of the model is the Cahn-Hilliard equation coupled with the equations of linear elasticity, the so-called Cahn-Larché system.
Owing to the fact that the mechanical deformation takes place on a macroscopic scale whereas the phase separation happens on a microscopic level, a multiscale approach is imperative.
We assume the pattern of the evolving microstructure to have an intrinsic length scale associated with it, which, after non-dimensionalisation, leads to a scaled model involving a small parameter epsilon > 0 that is suitable for periodic-homogenisation techniques.
For the full nonlinear problem, the so-called homogenised problem is then obtained by letting epsilon tend to zero using the method of asymptotic expansion.
Furthermore, we present a linearised Cahn-Larché system and use the method of two-scale convergence to derive, in a mathematically rigorous way, the associated limit problem, which turns out to have the same structure as in the nonlinear case. Properties of the limit model will be discussed. 
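In generic form (stated here for a scalar unknown, not the talk's specific Cahn-Larché system), the periodic-homogenisation ansatz expands the solution in the fast variable y = x/epsilon:

```latex
u^{\varepsilon}(x) = u_0\left(x,\tfrac{x}{\varepsilon}\right)
  + \varepsilon\, u_1\left(x,\tfrac{x}{\varepsilon}\right)
  + \varepsilon^2\, u_2\left(x,\tfrac{x}{\varepsilon}\right) + \cdots,
\qquad
\nabla \;\longrightarrow\; \nabla_x + \tfrac{1}{\varepsilon}\,\nabla_y,
```

with each u_i(x, y) periodic in y. Substituting this into the scaled equations and equating powers of epsilon yields cell problems on the periodicity cell and, at leading order, the homogenised macroscopic equation.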

Calculating optimal limits for transacting credit card customers 15:10 Fri 2 Mar, 2018 :: Horace Lamb 1022 :: Prof Peter Taylor :: University of Melbourne
Credit card users can roughly be divided into `transactors', who pay off their balance each month, and `revolvers', who maintain an outstanding balance, on which they pay substantial interest.
In this talk, we focus on modelling the behaviour of an individual transactor customer. Our motivation is to calculate an optimal credit limit from the bank's point of view. This requires an expression for the expected outstanding balance at the end of a payment period.
We establish a connection with the classical newsvendor model. Furthermore, we derive the Laplace transform of the outstanding balance, assuming that purchases are made according to a marked point process and that there is a simplified balance control policy which prevents all purchases in the rest of the payment period when the credit limit is exceeded. We then use the newsvendor model and our modified model to calculate bounds on the optimal credit limit for the more realistic balance control policy that accepts all purchases that do not exceed the limit.
We illustrate our analysis using a compound Poisson process example and show that the optimal limit scales with the distribution of the purchasing process, while the probability of exceeding the optimal limit remains constant.
Finally, we apply our model to some real credit card purchase data. 
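The classical newsvendor solution referred to above sets the stocking level (here, a candidate credit limit) at the critical-fractile quantile q* = F^{-1}(c_u / (c_u + c_o)) of the demand distribution. A sketch applying this to a simulated compound-Poisson spend over a payment period (illustrative rates and costs, not the paper's data):

```python
import numpy as np

# Newsvendor critical fractile: the optimal level is the
# c_u / (c_u + c_o) quantile of demand, with c_u the unit cost of
# under-provision and c_o of over-provision. Parameters illustrative.
rng = np.random.default_rng(1)

def newsvendor_limit(demand_samples, c_under, c_over):
    beta = c_under / (c_under + c_over)   # critical fractile
    return np.quantile(demand_samples, beta)

# Compound Poisson spend: N ~ Poisson(rate) purchases,
# sizes ~ Exponential(mean_size), summed over the period.
n_sim, rate, mean_size = 100_000, 10.0, 50.0
counts = rng.poisson(rate, n_sim)
spend = np.array([rng.exponential(mean_size, k).sum() for k in counts])

print(newsvendor_limit(spend, c_under=9.0, c_over=1.0))
```

With c_u = 9 and c_o = 1 the limit is the 90% quantile of the period spend, consistent with the observation that the optimal limit scales with the purchasing distribution while its exceedance probability stays fixed.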

Models, machine learning, and robotics: understanding biological networks 15:10 Fri 16 Mar, 2018 :: Horace Lamb 1022 :: Prof Steve Oliver :: University of Cambridge
The availability of complete genome sequences has enabled the construction of computer models of metabolic networks that may be used to predict the impact of genetic mutations on growth and survival. Both logical and constraint-based models of the metabolic network of the model eukaryote, the ale yeast Saccharomyces cerevisiae, have been available for some time and are continually being improved by the research community. While such models are very successful at predicting the impact of deleting single genes, the prediction of the impact of higher order genetic interactions is a greater challenge. Initial studies of limited gene sets provided encouraging results. However, the availability of comprehensive experimental data for the interactions between genes involved in metabolism demonstrated that, while the models were able to predict the general properties of the genetic interaction network, their ability to predict interactions between specific pairs of metabolic genes was poor. I will examine the reasons for this poor performance and demonstrate ways of improving the accuracy of the models by exploiting the techniques of machine learning and robotics.
The utility of these metabolic models rests on the firm foundations of genome sequencing data. However, there are two major problems with these kinds of network models: they lack dynamics, and they do not deal with the uncertain and incomplete nature of much biological data. To deal with these problems, we have developed the Flexible Nets (FNs) modelling formalism. FNs were inspired by Petri Nets and can deal with missing or uncertain data, incorporate both dynamics and regulation, and also have the potential for model predictive control of biotechnological processes.
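Constraint-based metabolic models of the kind described above are typically solved by flux balance analysis: maximise a biomass flux subject to steady-state mass balance S v = 0 and flux bounds, which is a linear program. A two-reaction toy network (my own example, not the yeast model):

```python
import numpy as np
from scipy.optimize import linprog

# Flux balance analysis as a linear program:
#   maximise biomass flux v2  subject to  S v = 0 and flux bounds.
# Toy network: reaction v1 imports metabolite A (capped at 10),
# reaction v2 converts A into biomass.
S = np.array([[1.0, -1.0]])       # mass balance for metabolite A
c = np.array([0.0, -1.0])         # linprog minimises, so use -v2
bounds = [(0.0, 10.0),            # uptake flux v1, capped at 10
          (0.0, None)]            # biomass flux v2, unbounded above

res = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds)
print(res.x)
```

The steady-state constraint forces v2 = v1, so growth is limited by the uptake cap; gene deletions are modelled by forcing the corresponding flux bounds to zero and re-solving.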


Modelling phagocytosis 15:10 Fri 25 May, 2018 :: Horace Lamb 1022 :: Prof Ngamta (Natalie) Thamwattana :: University of Wollongong
Phagocytosis refers to a process in which one cell type fully encloses and consumes unwanted cells, debris or particulate matter. It plays an important role in immune systems through the destruction of pathogens and the inhibition of cancerous cells. In this study, we combine models of cell-cell adhesion and of predator-prey dynamics to generate a new model for phagocytosis that is capable of capturing the interaction between cells in both space and time. Numerical results are presented, demonstrating the behaviours of cells during the process of phagocytosis. 
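Ignoring the spatial (adhesion) part, the predator-prey ingredient of such a model reduces to an ODE system. A Lotka-Volterra sketch with toy rates (illustrative only, not the authors' equations) shows the consumption coupling between the two cell populations:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lotka-Volterra kinetics as a stand-in for the predator-prey part
# of a phagocytosis model: "prey" = target cells, "predator" =
# phagocytes. Rates a, b, c, d are illustrative toy values.
def rhs(t, y, a=1.0, b=0.5, c=0.5, d=0.2):
    prey, pred = y
    return [a * prey - b * prey * pred,     # growth minus consumption
            c * prey * pred - d * pred]     # consumption-fuelled growth minus death

sol = solve_ivp(rhs, (0.0, 20.0), [2.0, 1.0])
print(sol.y[:, -1])
```

In the full model these kinetics sit inside a PDE with diffusion and nonlocal adhesion terms, which is what produces the spatial engulfment patterns shown in the talk.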

Tales of Multiple Regression: Informative Missingness, Recommender Systems, and R2D2 15:10 Fri 17 Aug, 2018 :: Napier 208 :: Prof Howard Bondell :: University of Melbourne
In this talk, we briefly discuss two projects tangentially related under the umbrella of highdimensional regression.
The first part of the talk investigates informative missingness in the framework of recommender systems. In this setting, we envision a potential rating for every object-user pair. The goal of a recommender system is to predict the unobserved ratings in order to recommend an object that the user is likely to rate highly. A typically overlooked point is that the ratings are not missing at random. For example, in movie ratings, a relationship between the user ratings and their viewing history is expected, as human nature dictates the user would seek out movies that they anticipate enjoying. We model this informative missingness, and place the recommender system in a shared-variable regression framework which can aid in prediction quality.
The second part of the talk deals with a new class of prior distributions for shrinkage regularization in sparse linear regression, particularly in the high-dimensional case. Instead of placing a prior on the coefficients themselves, we place a prior on the regression R-squared. This is then distributed to the coefficients by decomposing it via a Dirichlet distribution. We call the new prior R2D2 in light of its R-Squared Dirichlet Decomposition. Compared to existing shrinkage priors, we show that the R2D2 prior can simultaneously achieve both high prior concentration at zero, as well as heavier tails. These two properties combine to provide a higher degree of shrinkage on the irrelevant coefficients, along with less bias in estimation of the larger signals. 
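A generative sketch of the R2D2 idea: draw the model R-squared from a Beta prior, convert it to a total signal-to-noise scale w = R2/(1 - R2), and split w across the p coefficient variances with a Dirichlet draw. Hyperparameter values and the exact conditional forms here are illustrative; see the paper for the precise specification.

```python
import numpy as np

# Generative sketch of an R2D2-style prior (illustrative, simplified):
#   R2 ~ Beta(a, b)                 prior on model R-squared
#   w  = R2 / (1 - R2)              total prior variance budget
#   phi ~ Dirichlet(xi, ..., xi)    split of w across coefficients
#   beta_j ~ N(0, sigma2 * w * phi_j)
rng = np.random.default_rng(2)

def sample_r2d2_coefficients(p, a=1.0, b=5.0, xi=0.5, sigma2=1.0):
    r2 = rng.beta(a, b)
    w = r2 / (1.0 - r2)
    phi = rng.dirichlet(np.full(p, xi))
    return rng.normal(0.0, np.sqrt(sigma2 * w * phi))

beta = sample_r2d2_coefficients(p=20)
print(beta.shape)
```

The small Dirichlet concentration xi makes most phi_j tiny (strong shrinkage) while letting a few be large (heavy tails), which is the behaviour the abstract highlights.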

Bayesian Synthetic Likelihood 15:10 Fri 26 Oct, 2018 :: Napier 208 :: A/Prof Chris Drovandi :: Queensland University of Technology
Complex stochastic processes are of interest in many applied disciplines. However, the likelihood function associated with such models is often computationally intractable, prohibiting standard statistical inference frameworks for estimating model parameters based on data. Currently, the most popular simulation-based parameter estimation method is approximate Bayesian computation (ABC). Despite the widespread applicability and success of ABC, it has some limitations. This talk will describe an alternative approach, called Bayesian synthetic likelihood (BSL), which overcomes some limitations of ABC and can be much more effective in certain classes of applications. The talk will also describe various extensions to the standard BSL approach. This project has been a joint effort with several academic collaborators, postdocs and PhD students. 
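The core of synthetic likelihood is simple to sketch: at a proposed parameter, simulate many datasets, reduce each to summary statistics, and evaluate a Gaussian likelihood using their sample mean and covariance. A toy one-parameter example (my own, with the sample mean as the single summary):

```python
import numpy as np

# Minimal synthetic-likelihood evaluation: simulate n_sims datasets
# at theta, summarise each, and score the observed summary under a
# Gaussian fitted to the simulated summaries. Toy normal model.
rng = np.random.default_rng(3)

def synthetic_loglik(theta, observed_summary, n_sims=200, n_obs=50):
    sims = rng.normal(theta, 1.0, size=(n_sims, n_obs))
    summaries = sims.mean(axis=1, keepdims=True)    # 1-D summary
    mu = summaries.mean(axis=0)
    cov = np.cov(summaries, rowvar=False).reshape(1, 1)
    diff = observed_summary - mu
    _, logdet = np.linalg.slogdet(2.0 * np.pi * cov)
    return -0.5 * (logdet + diff @ np.linalg.solve(cov, diff))

obs = np.array([0.1])    # observed summary statistic
print(synthetic_loglik(0.0, obs), synthetic_loglik(3.0, obs))
```

In BSL this noisy log-likelihood is plugged into an MCMC sampler over theta; parameters consistent with the observed summary (here, theta near 0) score far higher than inconsistent ones.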

The role of microenvironment in regulation of cell infiltration and bortezomib-OV therapy in glioblastoma 15:10 Fri 11 Jan, 2019 :: IW 5.57 :: Professor Yangjin Kim :: Konkuk University, South Korea
Tumor microenvironment (TME) plays a critical role in regulation of tumor cell invasion in glioblastoma. Many microenvironmental factors such as extracellular matrix, microglia and astrocytes can either block or enhance this critical infiltration step in the brain [4]. Oncolytic viruses such as herpes simplex virus-1 (oHSV) are genetically modified to target and kill cancer cells while not harming healthy normal cells and are currently under multiple clinical trials for safety and efficacy [1]. Bortezomib is a peptide-based proteasome inhibitor and is an FDA-approved drug for myeloma and mantle cell lymphoma. Yoo et al [2] have previously demonstrated that bortezomib-induced unfolded protein response (UPR) in many tumor cell lines (glioma, ovarian, and head and neck) upregulated expression of heat shock protein 90 (HSP90), which then enhanced viral replication through promotion of nuclear localization of the viral polymerase in vitro. This led to synergistic tumor cell killing in vitro, and a combination treatment of mice with oHSV and bortezomib showed improved antitumor efficacy in vivo [2]. This combination therapy also increased the surface expression levels of NK cell activating markers and enhanced pro-inflammatory cytokine secretion. These findings demonstrated that the synergistic interaction between oHSV and bortezomib, a clinically relevant proteasome inhibitor, augments the cancer cell killing and promotes overall therapeutic efficacy. We investigated the role of NK cells in combination therapy with oncolytic virus (OV) and bortezomib. NK cells display rapid and potent immunity to metastasis and hematological cancers, and they overcome immunosuppressive effects of tumor microenvironment. We developed a mathematical model, a system of PDEs, in order to address the question of how the density of NK cells affects the growth of the tumor [3]. 
We found that the antitumor efficacy increases when the endogenous NKs are depleted, and also when exogenous NK cells are injected into the tumor. We also show that the TME plays a significant role in antitumor efficacy in OV combination therapy, and illustrate the effect of different spatial patterns of OV injection [5]. The results illustrate a possible phenotypic switch within tumor populations in a given microenvironment, and suggest new anti-invasion therapies. These predictions were validated by our in vivo and in vitro experiments.
References
[1] Kanai R, … Rabkin SD, "Oncolytic herpes simplex virus vectors and chemotherapy: are combinatorial strategies more effective for cancer?", Future Oncology, 6(4), 619–634, 2010.
[2] Yoo J, et al., "Bortezomib-induced unfolded protein response increases oncolytic HSV-1 replication resulting in synergistic antitumor effect", Clin Cancer Res, 20(14), 2014, pp. 3787–3798.
[3] Yangjin Kim, … Balveen Kaur and Avner Friedman, "Complex role of NK cells in regulation of oncolytic virus-bortezomib therapy", PNAS, 115(19), pp. 4927–4932, 2018.
[4] Yangjin Kim, … Sean Lawler, and Mark Chaplain, "Role of extracellular matrix and microenvironment in regulation of tumor growth and LAR-mediated invasion in glioblastoma", PLoS One, 13(10):e0204865, 2018.
[5] Yangjin Kim, …, Hans G. Othmer, "Synergistic effects of bortezomib-OV therapy and anti-invasive strategies in glioblastoma: A mathematical model", special issue, submitted, 2018. 
News matching "American option pricing in a Markov chain market m" 
ARC success The School of Mathematical Sciences was again very successful in attracting Australian Research Council funding for 2008. Recipients of ARC Discovery Projects are (with staff from the School highlighted):
Prof NG Bean; Prof PG Howlett; Prof CE Pearce; Prof SC Beecham; Dr AV Metcalfe; Dr JW Boland:
WaterLog - A mathematical model to implement recommendations of The Wentworth Group.
2008-2010: $645,000
Prof RJ Elliott:
Dynamic risk measures.
(Australian Professorial Fellowship)
2008-2012: $897,000
Dr MD Finn:
Topological Optimisation of Fluid Mixing.
2008-2010: $249,000
Prof PG Bouwknegt; Prof M Varghese; A/Prof S Wu:
Dualities in String Theory and Conformal Field Theory in the context of the Geometric Langlands Program.
2008-2010: $240,000
The latter grant is held through the ANU. Posted Wed 26 Sep 07. 

Australian Research Council Discovery Project Successes Congratulations to the following members of the School for their
success in the ARC Discovery Grants which were announced recently.
- A/Prof M Roughan; Prof H Shen: $315K, Network Management in a World of Secrets
- Prof AJ Roberts; Dr D Strunin: $315K, Effective and accurate model dynamics, deterministic and stochastic, across multiple space and time scales
- A/Prof J Denier; Prof AP Bassom: $180K, A novel approach to controlling boundary-layer separation
Posted Wed 17 Sep 08. 

Sam Cohen wins prize for best student talk at Aust MS 2009 Congratulations to Mr Sam Cohen, a PhD student within the School, who was awarded the B. H. Neumann Prize for the best student paper at the 2009 meeting of the Australian Mathematical Society for his talk on
Dynamic Risk Measures and Nonlinear Expectations with Markov Chain noise. Posted Tue 6 Oct 09. 
Publications matching "American option pricing in a Markov chain market m" 

Hitting probabilities and hitting times for stochastic fluid flows: the bounded model Bean, Nigel; O'Reilly, Malgorzata; Taylor, P, Probability in the Engineering and Informational Sciences 23 (121–147) 2009
On Markov-modulated exponential-affine bond price formulae Elliott, Robert; Siu, T, Applied Mathematical Finance 16 (1–15) 2009
Model dynamics across multiple length and time scales on a spatial multigrid Roberts, Anthony John, Multiscale Modeling & Simulation: a SIAM Interdisciplinary Journal 7 (1525–1548) 2009
A high resolution large-scale Gaussian random field rainfall model for Australian monthly rainfall Osti, Alexander; Leonard, Michael; Lambert, Martin; Metcalfe, Andrew, Water Down Under 2008, Adelaide 14/04/08
A high resolution spatiotemporal model for single storm events based on radar images Qin, J; Leonard, Michael; Kuczera, George; Thyer, M; Metcalfe, Andrew; Lambert, Martin, Water Down Under 2008, Adelaide 14/04/08
A temporally heterogeneous high-resolution large-scale Gaussian random field model for Australian rainfall Osti, Alexander; Leonard, Michael; Lambert, Martin; Metcalfe, Andrew, 17th IASTED International Conference on Applied Simulation and Modelling, Greece 23/06/08
Large Eddy simulations of a self-similar mixing layer using the stretched-vortex subgrid model Mattner, Trent, XXII International Congress of Theoretical and Applied Mechanics, Adelaide 24/08/08
Spatially inhomogeneous Poisson model of rainfall across Australia Leonard, Michael; Osti, Alexander; Lambert, Martin; Metcalfe, Andrew, Water Down Under 2008, Adelaide 14/04/08
A self tuning model for risk estimation Elliott, Robert; Filinkov, Alexei, Expert Systems with Applications 34 (1692–1697) 2008
A space-time Neyman-Scott rainfall model with defined storm extent Leonard, Michael; Lambert, Martin; Metcalfe, Andrew; Cowpertwait, P, Water Resources Research 44 (9402–9402) 2008
Discrete-time expectation maximization algorithms for Markov-modulated Poisson processes Elliott, Robert; Malcolm, William, IEEE Transactions on Automatic Control 53 (247–256) 2008
Performance measures of a multi-layer Markovian fluid model Bean, Nigel; O'Reilly, Malgorzata, Annals of Operations Research 160 (99–120) 2008
Robust Optimal Portfolio Choice Under Markovian Regime-switching Model Elliott, Robert; Siu, T, Methodology and Computing in Applied Probability 11 (145–157) 2008
Model dynamics on a multigrid across multiple length and time scales Roberts, Anthony John
Model subgrid microscale interactions to accurately discretise stochastic partial differential equations Roberts, Anthony John
Model turbulent floods with the Smagorinski large eddy closure Roberts, Anthony John; Georgiev, D; Strunin, D
Pricing Options and Variance Swaps in Markov-Modulated Brownian Markets Elliott, Robert; Swishchuk, A, chapter in Hidden Markov Models in Finance (Vieweg, Springer Science+Business Media) 45–68, 2007
Smoothed Parameter Estimation for a Hidden Markov Model of Credit Quality Korolkiewicz, M; Elliott, Robert, chapter in Hidden Markov Models in Finance (Vieweg, Springer Science+Business Media) 69–90, 2007
The Term Structure of Interest Rates in a Hidden Markov Setting Elliott, Robert; Wilson, C, chapter in Hidden Markov Models in Finance (Vieweg, Springer Science+Business Media) 15–30, 2007
A qualitative Hamiltonian model for human motion Pearce, Charles; Ivancevic, V, 8th International Conference on Nonlinear Functional Analysis and Applications, Seoul, South Korea 09/08/04
In search for an appropriate granularity to model routing policies Muhlbauer, W; Uhlig, S; Fu, BJ; Meulle, M; Maennel, Olaf, Ulrich, Kyoto, Japan 27/08/07
Large-eddy simulations of a turbulent mixing layer using the stretched-vortex subgrid model Mattner, Trent, 16th Australasian Fluid Mechanics Conference, Gold Coast, Australia 03/12/07
Implementing a space-time rainfall model for the Sydney region Leonard, Michael; Metcalfe, Andrew; Lambert, Martin; Kuczera, George, Water Science and Technology 55 (39–47) 2007
Nonlinear dynamics on centre manifolds describing turbulent floods: k-omega model Georgiev, D; Roberts, Anthony John; Strunin, D, Discrete and Continuous Dynamical Systems Supplement (419–428) 2007
Building an AS-topology model that captures route diversity Muhlbauer, W; Feldmann, A; Maennel, Olaf; Roughan, Matthew; Uhlig, S, SIGCOMM 2006, Pisa, Italy 11/09/06
A Markov analysis of social learning and adaptation Wheeler, Scott; Bean, Nigel; Gaffney, Janice; Taylor, Peter, Journal of Evolutionary Economics 16 (299–319) 2006
A hidden Markov approach to the forward premium puzzle Elliott, Robert; Han, B, International Journal of Theoretical and Applied Finance 9 (1009–1020) 2006
Capital allocation in insurance: Economic capital and the allocation of the default option value Sherris, M; Van Der Hoek, John, North American Actuarial Journal 10 (39–61) 2006
Data-recursive smoother formulae for partially observed discrete-time Markov chains Elliott, Robert; Malcolm, William, Stochastic Analysis and Applications 24 (579–597) 2006
Efficient simulation of a space-time Neyman-Scott rainfall model Leonard, Michael; Metcalfe, Andrew; Lambert, Martin, Water Resources Research 42 (11503–11503) 2006
Mathematical analysis of an extended Mumford-Shah model for image segmentation Tao, Trevor; Crisp, David; Van Der Hoek, John, Journal of Mathematical Imaging and Vision 24 (327–340) 2006
Option pricing for GARCH models with Markov switching Elliott, Robert; Siu, T; Chan, L, International Journal of Theoretical and Applied Finance 9 (825–841) 2006
Stochastic volatility model with filtering Elliott, Robert; Miao, H, Stochastic Analysis and Applications 24 (661–683) 2006
The effect on survival of early detection of breast cancer in South Australia Tallis, George; Leppard, Phillip; O'Neill, Terence, Model Assisted Statistics and Applications 1 (115–123) 2006
Accurately model the Kuramoto-Sivashinsky dynamics with holistic discretization MacKenzie, T; Roberts, Anthony John, SIAM Journal on Applied Dynamical Systems 5 (365–402) 2006
An accurate and comprehensive model of thin fluid flows with inertia on curved substrates Roberts, Anthony John; Li, Z, Journal of Fluid Mechanics 553 (33–73) 2006
Option Pricing for Pure Jump Processes with Markov Switching Compensators Elliott, Robert, Finance and Stochastics 10 (250–275) 2006
A hydrodynamic model of the incompressible Navier-Stokes equations for free surface flows Lee, Jong; Teubner, Michael; Nixon, John; Gill, Peter, The XXXI IAHR Congress, Seoul, Korea 11/09/05
New Gaussian mixture state estimation schemes for discrete time hybrid Gauss-Markov systems Elliott, Robert; Dufour, F; Malcolm, William, The 2005 American Control Conference, Portland, OR, USA 08/06/05
Simulating catchment-scale monthly rainfall with classes of hidden Markov models Whiting, Julian; Thyer, M; Lambert, Martin; Metcalfe, Andrew, The 29th Hydrology and Water Resources Symposium, Rydges Lakeside, Canberra, Australia 20/02/05
A 3D non-hydrostatic pressure model for small amplitude free surface flows Lee, Jong; Teubner, Michael; Nixon, John; Gill, Peter, International Journal for Numerical Methods in Fluids 50 (649–672) 2005
Development of a 3D non-hydrostatic pressure model for free surface flows Lee, Jong; Teubner, Michael; Nixon, John; Gill, Peter, The ANZIAM Journal - Online full-text 46 (623–636) 2005
General smoothing formulas for Markov-modulated Poisson observations Elliott, Robert; Malcolm, William, IEEE Transactions on Automatic Control 50 (1123–1134) 2005
Hidden Markov chain filtering for a jump diffusion model Wu, P; Elliott, Robert, Stochastic Analysis and Applications 23 (153–163) 2005
Hidden Markov filter estimation of the occurrence time of an event in a financial market Elliott, Robert; Tsoi, A, Stochastic Analysis and Applications 23 (1165–1177) 2005
Option pricing and Esscher transform under regime switching Elliott, Robert; Chan, L; Siu, T, Annals of Finance 1 (423–432) 2005
Parameter estimation for a regime-switching mean-reverting model with jumps Wu, P; Elliott, Robert, International Journal of Theoretical and Applied Finance 8 (791–806) 2005
Ramaswami's duality and probabilistic algorithms for determining the rate matrix for a structured GI/M/1 Markov chain Hunt, Emma, The ANZIAM Journal 46 (485–493) 2005
Risk-sensitive filtering and smoothing for continuous-time Markov processes Malcolm, William; Elliott, Robert; James, M, IEEE Transactions on Information Theory 51 (1731–1738) 2005
State and mode estimation for discrete-time jump Markov systems Elliott, Robert; Dufour, F; Malcolm, William, SIAM Journal on Control and Optimization 44 (1081–1104) 2005
Expression profiling of a myeloid cell line model to identify novel transcription factors influencing myeloid cell differentiation, proliferation and leukaemia Wilkinson, Christopher; Brown, Anna; Kok, Chung; Solomon, Patricia; Goodall, Gregory; Gonda, Thomas; D'Andrea, M, 5th Australian Microarray Conference 2005, Barossa Valley, South Australia 29/09/05
Path integrals in fluctuating markets with a non-Gaussian option pricing model Bonnet, Frederic Daniel; Van Der Hoek, John; Allison, Andrew; Abbott, Derek, Noise and fluctuations in econophysics and finance (2005), Austin, Texas, USA 24/05/05
Computer algebra resolves a multitude of microscale interactions to model stochastic partial differential equations Roberts, Anthony John
A generalized Hamiltonian model for the dynamics of human motion Pearce, Charles; Ivancevic, V, chapter in Contemporary differential equations and applications (Nova Science Publishers) 15–28, 2004
Capital allocation in insurance: Economic capital and the allocation of the default option value Sherris, M; Van Der Hoek, John, 14th International AFIR Colloquium 2004, Boston, Massachusetts, USA 07/11/04
A probabilistic algorithm for finding the rate matrix of a block-GI/M/1 Markov chain Hunt, Emma, The ANZIAM Journal 45 (457–475) 2004
Memory, market stability and the nonlinear cobweb theorem Gaffney, Janice; Pearce, Charles, The ANZIAM Journal 45 (547–555) 2004
Pricing claims on non tradable assets Elliott, Robert; Van Der Hoek, John, Contemporary Mathematics 351 (103–114) 2004
Swift-Hohenberg model for magnetoconvection Cox, Stephen; Matthews, P; Pollicott, S, Physical Review E (Statistical, Nonlinear, and Soft Matter Physics) 69 (0663141–06631414) 2004
The angiographic and clinical benefits of mibefradil in the coronary slow flow phenomenon Beltrame, John; Turner, Stuart; Leslie, S; Solomon, Patricia; Freedman, S; Horowitz, John, Journal of the American College of Cardiology 44 (57–62) 2004
An optimal feedback model for a nonlinear causal system Howlett, P; Torokhti, Anatoli; Pearce, Charles, ICOTA'06, Ballarat, Vic, Australia 09/12/04
Development of Non-Homogeneous and Hierarchical Hidden Markov Models for Modelling Monthly Rainfall and Streamflow Time Series Whiting, Julian; Lambert, Martin; Metcalfe, Andrew; Kuczera, George, World Water and Environmental Resources Congress (2004), Salt Lake City, Utah, USA 27/06/04
Second moments of a matrix analytic model of machine maintenance Green, David; Metcalfe, Andrew, IMA International Conference on Modelling in Industrial Maintenance and Reliability (5th: 2004), Salford, United Kingdom 05/04/04
Arbitrage in a Discrete Version of the Wick-Fractional Black-Scholes Model Bender, C; Elliott, Robert, Mathematics of Operations Research 29 (935–945) 2004
Robust M-ary detection filters and smoothers for continuous-time jump Markov systems Elliott, Robert; Malcolm, William, IEEE Transactions on Automatic Control 49 (1046–1055) 2004
Two-zone model of shear dispersion in a channel using centre manifolds Roberts, Anthony John; Strunin, D, Quarterly Journal of Mechanics and Applied Mathematics 57 (363–378) 2004
Arborescences, matrix-trees and the accumulated sojourn time in a Markov process Pearce, Charles; Falzon, L, chapter in Stochastic analysis and applications Volume 3 (Nova Science Publishers) 147–168, 2003
A useful bound for region merging algorithms in a Bayesian model Tao, Trevor; Crisp, David, Computer Science 2003: Twenty-Sixth Australasian Computer Science Conference, Adelaide, Australia 04/02/03
A probabilistic algorithm for determining the fundamental matrix of a block M/G/1 Markov chain Hunt, Emma, Mathematical and Computer Modelling 38 (1203–1209) 2003
A complete yield curve description of a Markov interest rate model Elliott, Robert; Mamon, R, International Journal of Theoretical and Applied Finance 6 (317–326) 2003
A nonparametric hidden Markov model for climate state identification Lambert, Martin; Whiting, Julian; Metcalfe, Andrew, Hydrology and Earth System Sciences 7 (652–667) 2003
A philosophy for the modelling of realistic nonlinear systems Howlett, P; Torokhti, Anatoli; Pearce, Charles, Proceedings of the American Mathematical Society 132 (353–363) 2003
Effect of environmental fluctuations on the dynamic composition of engineered cartilage: a deterministic model in stochastic environment Saha, Asit; Mazumdar, Jagan; Morsi, Y, IEEE Transactions on NanoBioscience 2 (158–162) 2003
Numerical model of electrical potential within the human head Nixon, John; Rasser, Paul; Teubner, Michael; Clark, C; Bottema, M, International Journal for Numerical Methods in Engineering 56 (2353–2366) 2003
Perpetual American options with fractional Brownian motion Elliott, Robert; Chan, L, Quantitative Finance 3 (1–6) 2003
RMS values for force, stroke and deflection in a quarter-car model active suspension with preview Thompson, A; Pearce, Charles, Vehicle System Dynamics 39 (57–75) 2003
Robust parameter estimation for asset price models with Markov modulated volatilities Elliott, Robert; Malcolm, William; Tsoi, A, Journal of Economic Dynamics & Control 27 (1391–1409) 2003
Approximating spectral invariants of Harper operators on graphs II Varghese, Mathai; Schick, T; Yates, S, Proceedings of the American Mathematical Society 131 (1917–1923) 2003
Using the Hull and White two factor model in bank treasury risk management Elliott, Robert; Van Der Hoek, John, chapter in Mathematical finance - Bachelier Congress 2000. Selected papers from the First World Congress of the Bachelier Finance Society, Paris, June 29-July 1, 2000 (Springer-Verlag) 269–280, 2002
A matrix analytic model for machine maintenance Green, David; Metcalfe, Andrew; Swailes, D, Matrix-Analytic Methods: Theory and Applications, Adelaide, Australia 14/07/02
American options with regime switching Buffington, J; Elliott, Robert, International Journal of Theoretical and Applied Finance 5 (497–514) 2002
Anti-ovine interleukin-1 beta monoclonal antibody immunotherapy in an ovine model of Gram-negative septic shock Peake, Sandra; Pierides, J; Leppard, Phillip; Russ, Graeme, Critical Care Medicine 30 (171–181) 2002
Portfolio optimization, hidden Markov models, and technical analysis of P&F charts Elliott, Robert; Hinz, J, International Journal of Theoretical and Applied Finance 5 (385–399) 2002
Supporting maintenance strategies using Markov models Al-Hassan, K; Swailes, D; Chan, J; Metcalfe, Andrew, IMA Journal of Management Mathematics 13 (17–27) 2002
The Orevkov invariant of an affine plane curve Neumann, W; Norbury, Paul, Transactions of the American Mathematical Society 355 (519–538) 2002
A lubrication model of coating flows over a curved substrate in space Roy, R; Roberts, Anthony John; Simpson, M, Journal of Fluid Mechanics 454 (235–261) 2002
A mathematical model of partial-thickness burn-wound infection by Pseudomonas aeruginosa: Quorum sensing and the build-up to invasion Koerber, Adrian; King, J; Ward, J; Williams, P; Croft, J; Sockett, R, Bulletin of Mathematical Biology 64 (239–259) 2002
An interest rate model with a Markovian mean reverting level Elliott, Robert; Mamon, R, Quantitative Finance 2 (454–458) 2002
Hidden Markov chain filtering for generalised Bessel processes Elliott, Robert; Platen, E, chapter in Stochastics in Finite and Infinite Dimensions - in honor of Gopinath Kallianpur (Birkhauser) 123–143, 2001
Robust M-ary detection filters for continuous-time jump Markov systems Elliott, Robert; Malcolm, William, The 40th IEEE Conference on Decision and Control (CDC), Orlando, Florida 04/12/01
Direct computation of the performance index for an optimally controlled active suspension with preview applied to a half-car model Thompson, A; Pearce, Charles, Vehicle System Dynamics 35 (121–137) 2001
On the existence of a quasistationary measure for a Markov chain Lasserre, J; Pearce, Charles, Annals of Probability 29 (437–446) 2001
Performance index for a preview active suspension applied to a quarter-car model Thompson, A; Pearce, Charles, Vehicle System Dynamics 35 (55–66) 2001
Hidden state Markov chain time series models for arid zone hydrology Cigizoglu, K; Adamson, Peter; Lambert, Martin; Metcalfe, Andrew, International Symposium on Water Resources and Environmental Impact Assessment (2001), Istanbul, Turkey 11/07/01
On the application of a polling model with non-zero walk times and priority processing to a medical emergency-room environment Cicin-Sain, M; Pearce, Charles; Sunde, J, ITI 2001, Pula, Croatia 19/06/01
The cobweb model and a modified genetic algorithm Gaffney, Janice; Parrott, Krystyna Leanne; Pearce, Charles; Salzborn, Franz, chapter in Commerce, Complexity and Evolution (Cambridge University Press) 267–276, 2000
Entropy, Markov information sources and Parrondo games Pearce, Charles, UPoN'99: Second International Conference, Adelaide, Australia 12/07/99
Level-phase independence for GI/M/1-type Markov chains Latouche, Guy; Taylor, Peter, Journal of Applied Probability 37 (984–998) 2000
Numerical model of electrical potential within a human head Rasser, Paul; Teubner, Michael; Clark, C, The ANZIAM Journal 42 (C1218–C1237) 2000
On the discrete and continuous Miura chain associated with the sixth Painlevé equation Nijhoff, F; Joshi, Nalini; Hone, Andrew, Physics Letters A 264 (396–406) 2000
