
Courses matching "Hidden Markov processes"
Random Processes III
This course introduces students to the fundamental concepts of random processes, particularly continuous-time Markov chains, and related structures. These are the essential building blocks of any random system, be it a telecommunications network, a hospital waiting list or a transport system. They also arise in many other environments where you wish to capture the development of some element of random behaviour over time, such as the state of the surrounding environment.
Topics covered are: Continuous-time Markov chains: definition and basic properties, transient behaviour, the stationary distribution, hitting probabilities and expected hitting times, reversibility; Basic Queueing Theory: arrival processes, service time distributions, Little's Law; Point Processes: Poisson process, properties and generalisations; Renewal Processes: preliminaries, renewal function, renewal theory and applications, stationary and delayed renewal processes; Queueing Networks: Kendall's notation, Jackson networks, mean value analysis; Loss Networks: truncated reversible processes, circuit-switched networks, reduced load approximations.
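As a small illustration of two of the queueing topics above, the sketch below (plain Python, with invented arrival and service rates) checks Little's Law L = λW against the closed-form M/M/1 results, and checks that the geometric stationary distribution of the queue length has the right mean.

```python
# Little's Law for an M/M/1 queue (Kendall's notation: Poisson arrivals,
# exponential service, one server). The rates are invented for the example.
lam = 2.0   # arrival rate (customers per unit time)
mu = 5.0    # service rate; lam < mu is needed for stability

# Standard M/M/1 results: mean number in system and mean sojourn time.
L = lam / (mu - lam)          # expected customers in system
W = 1.0 / (mu - lam)          # expected time a customer spends in system
assert abs(L - lam * W) < 1e-12   # Little's Law: L = lambda * W

# Stationary distribution of the queue length: pi_n = (1 - rho) * rho^n.
rho = lam / mu
pi = [(1 - rho) * rho**n for n in range(200)]
mean_n = sum(n * p for n, p in enumerate(pi))
assert abs(mean_n - L) < 1e-9     # mean of the stationary distribution is L
```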
Events matching "Hidden Markov processes" 
Watching evolution in real time; problems and potential research areas.
15:10 Fri 26 May, 2006 :: G08 Mathematics Building University of Adelaide :: Prof Alan Cooper (Federation Fellow)
Recent studies (1) have indicated problems with our
ability to use the genetic distances between species to estimate the
time since their divergence (so called molecular clocks). An
exponential decay curve has been detected in comparisons of closely
related taxa in mammal and bird groups, and rough approximations
suggest that molecular clock calculations may be problematic for the
recent past (e.g. <1 million years). Unfortunately, this period
encompasses a number of key evolutionary events where estimates of
timing are critical such as modern human evolutionary history, the
domestication of animals and plants, and most issues involved in
conservation biology. A solution (formulated at UA) will be briefly
outlined. A second area of active interest is the recent suggestion
(2) that mitochondrial DNA diversity does not track population size in
several groups, in contrast to standard thinking. This finding has
been interpreted as showing that mtDNA may not be evolving neutrally,
as has long been assumed.
Large ancient DNA datasets provide a means to examine these issues, by
revealing evolutionary processes in real time (3). The data also
provide a rich area for mathematical investigation, as temporal
sampling provides information about several parameters that are
unknown in serial coalescent calculations (4).
References:
(1) Ho SYW et al. Time dependency of molecular rate estimates and systematic overestimation of recent divergence times. Mol. Biol. Evol. 22, 1561-1568 (2005); Penny D, Nature 436, 183-184 (2005).
(2) Bazin E. et al. Population size does not influence mitochondrial genetic diversity in animals. Science 312, 570 (2006); Eyre-Walker A. Size does not matter for mitochondrial DNA. Science 312, 537 (2006).
(3) Shapiro B. et al. Rise and fall of the Beringian steppe bison. Science 306, 1561-1565 (2004); Chan et al. Bayesian estimation of the timing and severity of a population bottleneck from ancient DNA. PLoS Genetics 2, e59 (2006).
(4) Drummond et al. Measurably evolving populations. Trends Ecol. Evol. 18, 481-488 (2003); Drummond et al. Bayesian coalescent inference of past population dynamics from molecular sequences. Mol. Biol. Evol. 22, 1185-1192 (2005).


Alberta Power Prices 15:10 Fri 9 Mar, 2007 :: G08 Mathematics Building University of Adelaide :: Prof. Robert Elliott
The pricing of electricity involves several interesting features. Apart from daily, weekly and seasonal fluctuations, power prices often exhibit large spikes. To some extent this is because electricity cannot be stored. We propose a model for power prices in the Alberta market. This involves a diffusion process modified by a factor related to a Markov chain which describes the number of large generators on line. The model is calibrated and future contracts priced. 
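A minimal sketch of the kind of regime-switching model described above: a mean-reverting diffusion whose level is shifted by a two-state Markov chain tracking whether a large generator is off line. The two-state simplification, the Euler discretisation, and every parameter value are assumptions for illustration, not the calibrated Alberta model from the talk.

```python
import random, math

random.seed(1)

dt = 1.0 / 365                         # daily time step
kappa, theta, sigma = 5.0, 40.0, 8.0   # mean-reversion speed, base level, volatility
spike = 60.0                           # price premium when a large generator is down
p_fail, p_repair = 0.02, 0.25          # invented daily regime-switch probabilities

state, price, path = 0, theta, []
for _ in range(365):
    # Two-state Markov chain for outages (state 1 = large generator off line).
    if state == 0 and random.random() < p_fail:
        state = 1
    elif state == 1 and random.random() < p_repair:
        state = 0
    level = theta + spike * state
    # Euler step of a mean-reverting diffusion around the regime-dependent level.
    price += kappa * (level - price) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    path.append(price)

assert len(path) == 365   # one simulated price per day
```

In the outage regime the drift pulls the price towards theta + spike, producing the spike-and-decay shape that spot power prices exhibit.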

Similarity solutions for surface-tension driven flows 15:10 Fri 14 Mar, 2008 :: LG29 Napier Building University of Adelaide :: Prof John Lister :: Department of Applied Mathematics and Theoretical Physics, University of Cambridge, UK
The breakup of a mass of fluid into drops is a ubiquitous phenomenon in daily life, the natural environment and technology, with common examples including a dripping tap, ocean spray and inkjet printing. It is a feature of many generic industrial processes such as spraying, emulsification, aeration, mixing and atomisation, and is an undesirable feature in coating and fibre spinning. Surface-tension driven pinch-off and the subsequent recoil are examples of finite-time singularities in which the interfacial curvature becomes infinite at the point of disconnection. As a result, the flow near the point of disconnection becomes self-similar and independent of initial and far-field conditions. Similarity solutions will be presented for the cases of inviscid and very viscous flow, along with comparison to experiments. In each case, a boundary-integral representation can be used both to examine the time-dependent behaviour and as the basis of a modified Newton scheme for direct solution of the similarity equations. 

Global and Local stationary modelling in finance: Theory and empirical evidence 14:10 Thu 10 Apr, 2008 :: G04 Napier Building University of Adelaide :: Prof. Dominique Guégan :: Université Paris 1 Panthéon-Sorbonne
To model real data sets using second order stochastic processes requires that the data sets verify the second order stationarity condition. This stationarity condition concerns the unconditional moments of the process. It is in that context that most of the models developed since the sixties have been studied; we refer to the ARMA processes (Brockwell and Davis, 1988), the ARCH, GARCH and EGARCH models (Engle, 1982, Bollerslev, 1986, Nelson, 1990), the SETAR process (Lim and Tong, 1980 and Tong, 1990), the bilinear model (Granger and Andersen, 1978, Guégan, 1994), the EXPAR model (Haggan and Ozaki, 1980), the long memory process (Granger and Joyeux, 1980, Hosking, 1981, Gray, Zang and Woodward, 1989, Beran, 1994, Giraitis and Leipus, 1995, Guégan, 2000), and the switching process (Hamilton, 1988). For all these models, we get an invertible causal solution under specific conditions on the parameters; the forecast points and the forecast intervals are then available.
Thus, the stationarity assumption is the basis for a general asymptotic theory for identification, estimation and forecasting. It guarantees that the increase of the sample size leads to more and more information of the same kind which is basic for an asymptotic theory to make sense.
Nonstationarity modelling also has a long tradition in econometrics. It is based on the conditional moments of the data generating process. It appears mainly in heteroscedastic and volatility models, like the GARCH and related models, and stochastic volatility processes (Ghysels, Harvey and Renault 1997). This nonstationarity also appears in a different way in structural change models like the switching models (Hamilton, 1988), the stop-break model (Diebold and Inoue, 2001, Breidt and Hsu, 2002, Granger and Hyung, 2004) and the SETAR models, for instance. It can also be observed in linear models with time-varying coefficients (Nicholls and Quinn, 1982, Tsay, 1987).
Thus, using stationary unconditional moments suggests global stationarity for the model, while using nonstationary unconditional moments, nonstationary conditional moments, or assuming the existence of states suggests that this global stationarity fails and that we only observe locally stationary behavior.
The growing evidence of instability in the stochastic behavior of stocks, exchange rates, and some economic data sets such as growth rates, characterized by the existence of volatility or of jumps in the variance or in the levels of the prices, makes it necessary to discuss the assumption of global stationarity and its consequences for modelling, particularly for forecasting. We can thus address several questions with respect to these remarks.
1. What kinds of nonstationarity affect the major financial and economic data sets? How to detect them?
2. Local and global stationarities: How are they defined?
3. What is the impact of evidence of nonstationarity on the statistics computed from globally nonstationary data sets?
4. How can we analyze data sets in the globally nonstationary framework? Does the asymptotic theory work in a nonstationary framework?
5. What kind of models create local stationarity instead of global stationarity? How can we use them to develop a modelling and a forecasting strategy?
These questions began to be discussed in some papers in the economic literature. For some of these questions the answers are known; for others, very few works exist. In this talk I will discuss all these problems and will propose two new strategies and models to solve them. Several interesting topics in empirical finance awaiting future research will also be discussed.


Elliptic equation for diffusion-advection flows 15:10 Fri 15 Aug, 2008 :: G03 Napier Building University of Adelaide :: Prof. Pavel Bedrikovetsky :: Australian School of Petroleum Science, University of Adelaide.
The standard diffusion equation is obtained by Einstein's method and its generalisation, Fokker-Planck-Kolmogorov-Feller theory. The time between jumps in Einstein's derivation is constant.
We discuss random walks with a residence time distribution, which occur in flows of solutes and suspensions/colloids in porous media, CO2 sequestration in coal mines, and several processes in chemical, petroleum and environmental engineering. Rigorous application of Einstein's method results in a new equation, containing second-order time and mixed dispersion terms expressing the dispersion of the particle time steps.
Usually, adding the second time derivative requires additional initial data. For the equation derived, the condition that the solution remain bounded as time tends to infinity provides uniqueness of the solution of the Cauchy problem.
The solution of the pulse injection problem, describing a common tracer injection experiment, is studied in greater detail. The new theory predicts a delay of the tracer maximum relative to the velocity of the flow, while its forward "tail" contains many more particles than in the solution of the classical parabolic (advection-dispersion) equation. This is in agreement with experimental observations and predictions of direct simulation.
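A toy continuous-time random walk in this spirit: unit jumps with a residence (waiting) time drawn between jumps, here exponentially distributed (any residence-time distribution could be substituted). The rate, horizon and sample size are invented for the sketch; with exponential waiting times the classical diffusive scaling of the mean-square displacement is recovered, which the last lines check.

```python
import random

random.seed(0)

def ctrw_position(t_end):
    """Position of one walker at time t_end: symmetric unit jumps,
    exponential residence time (rate 1) between jumps."""
    t, x = 0.0, 0
    while True:
        t += random.expovariate(1.0)      # residence time before the next jump
        if t > t_end:
            return x
        x += random.choice((-1, 1))       # symmetric jump on the integer lattice

# Mean-square displacement grows linearly in time (diffusive scaling):
# E[x(t)^2] = rate * t for this walk.
samples = [ctrw_position(50.0) for _ in range(2000)]
msd = sum(x * x for x in samples) / len(samples)
assert 30 < msd < 75   # theoretical value is 50; wide Monte Carlo tolerance
```

Replacing the exponential by a heavy-tailed residence-time distribution is what breaks this scaling and motivates the modified equations discussed in the talk.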


Probabilistic models of human cognition 15:10 Fri 29 Aug, 2008 :: G03 Napier Building University of Adelaide :: Dr Daniel Navarro :: School of Psychology, University of Adelaide
Over the last 15 years a fairly substantial psychological literature has developed in which human reasoning and decision-making is viewed as the solution to a variety of statistical problems posed by the environments in which we operate. In this talk, I briefly outline the general approach to cognitive modelling that is adopted in this literature, which relies heavily on Bayesian statistics, and introduce a little of the current research in this field. In particular, I will discuss work by myself and others on the statistical basis of how people make simple inductive leaps and generalisations, and the links between these generalisations and how people acquire word meanings and learn new concepts. If time permits, the extensions of the work in which complex concepts may be characterised with the aid of nonparametric Bayesian tools such as Dirichlet processes will be briefly mentioned. 

Free surface Stokes flows with surface tension 15:10 Fri 5 Sep, 2008 :: G03 Napier Building University of Adelaide :: Prof. Darren Crowdy :: Imperial College London
In this talk, we will survey a number of different
free boundary problems involving slow viscous (Stokes) flows
in which surface tension is active on the free boundary. Both steady
and unsteady flows will be considered. Motivating applications
range from industrial processes such as viscous sintering (where
end-products are formed as a result of the surface-tension-driven densification
of a compact of smaller particles that are heated in order that they
coalesce) to biological phenomena such as understanding how
organisms swim (i.e. propel themselves) at low Reynolds numbers.
Common to our approach to all these problems will be an
analytical/theoretical treatment of model problems via complex variable
methods, techniques well-known at infinite Reynolds numbers
but used much less often in the Stokes regime. These model
problems can give helpful insights into the behaviour of the true
physical systems. 

The Mechanics of Nanoscale Devices 15:10 Fri 10 Oct, 2008 :: G03 Napier Building University of Adelaide :: Associate Prof. John Sader :: Department of Mathematics and Statistics, The University of Melbourne
Nanomechanical sensors are often used to measure environmental
changes with extreme sensitivity. Controlling the effects of surfaces and
fluid dissipation presents significant challenges to achieving the
ultimate sensitivity in these devices. In this talk, I will give an
overview of theoretical/experimental work we are undertaking to explore
the underlying physical processes in these systems. The talk will be
general and aimed at introducing some recent developments in the field of
nanomechanical sensors. 

Dispersing and settling populations in biology 15:10 Tue 23 Jun, 2009 :: Napier G03 :: Prof Kerry Landman :: University of Melbourne
Partial differential equations are used to model populations (such as cells, animals or molecules) consisting of individuals that undergo two important processes: dispersal and settling. I will describe some general characteristics of these systems, as well as some of our recent projects. 

Statistical analysis for harmonized development of systemic organs in human fetuses 11:00 Thu 17 Sep, 2009 :: School Board Room :: Prof Kanta Naito :: Shimane University
The growth processes of human babies have been studied extensively, but many issues about the development of the human fetus remain unresolved. The aim of this research is to investigate the developing process of the systemic organs of human fetuses based on a data set of measurements of fetal bodies and organs. Specifically, this talk is concerned with giving a mathematical understanding of the harmonized development of the organs of human fetuses. A method to evaluate such harmony is proposed using the maximal dilatation that appears in the theory of quasiconformal mappings. 

American option pricing in a Markov chain market model 15:10 Fri 19 Mar, 2010 :: School Board Room :: Prof Robert Elliott :: School of Mathematical Sciences, University of Adelaide
This paper considers a model for asset pricing in a world where
the randomness is modeled by a Markov chain rather than Brownian motion.
In this paper we develop a theory of optimal stopping and related
variational inequalities for American options in this model. A version of
Saigal's Lemma is established and numerical algorithms developed.
This is joint work with John van der Hoek. 

Modelling of the Human Skin Equivalent 15:10 Fri 26 Mar, 2010 :: Napier 102 :: Prof Graeme Pettet :: Queensland University of Technology
A brief overview will be given of the development of a so called Human Skin Equivalent Construct. This laboratory grown construct can be used for studying growth, response and the repair of human skin subjected to wounding and/or treatment under strictly regulated conditions. Details will also be provided of a series of mathematical models we have developed that describe the dynamics of the Human Skin Equivalent Construct, which can be used to assist in the development of the experimental protocol, and to provide insight into the fundamental processes at play in the growth and development of the epidermis in both healthy and diseased states. 

The mathematics of theoretical inference in cognitive psychology 15:10 Fri 11 Jun, 2010 :: Napier LG24 :: Prof John Dunn :: University of Adelaide
The aim of psychology in general, and of cognitive psychology in particular, is to construct theoretical accounts of mental processes based on observed changes in performance on one or more cognitive tasks. The fundamental problem faced by the researcher is that these mental processes are not directly observable but must be inferred from changes in performance between different experimental conditions. This inference is further complicated by the fact that performance measures may only be monotonically related to the underlying psychological constructs. Statetrace analysis provides an approach to this problem which has gained increasing interest in recent years. In this talk, I explain statetrace analysis and discuss the set of mathematical issues that flow from it. Principal among these are the challenges of statistical inference and an unexpected connection to the mathematics of oriented matroids. 

Some thoughts on wine production 15:05 Fri 18 Jun, 2010 :: School Board Room :: Prof Zbigniew Michalewicz :: School of Computer Science, University of Adelaide
In the modern information era, managers (e.g. winemakers) recognize the
competitive opportunities represented by decision-support tools which can
provide significant cost savings & revenue increases for their businesses.
Wineries make daily decisions on the processing of grapes, from harvest time
(prediction of maturity of grapes, scheduling of equipment and labour, capacity
planning, scheduling of crushers) through tank farm activities (planning and
scheduling of wine and juice transfers on the tank farm) to packaging processes
(bottling and storage activities). As such an operation is quite complex, the whole
area is loaded with interesting OR-related issues. These include the issues of
global vs. local optimization, relationship between prediction and optimization,
operating in dynamic environments, strategic vs. tactical optimization, and
multi-objective optimization & trade-off analysis. During the talk we address
the above issues; a few realworld applications will be shown and discussed to
emphasize some of the presented material. 

Meteorological drivers of extreme bushfire events in southern Australia 15:10 Fri 2 Jul, 2010 :: Benham Lecture Theatre :: Prof Graham Mills :: Centre for Australian Weather and Climate Research, Melbourne
Bushfires occur regularly during summer in southern Australia, but only a few of these fires become iconic due to their effects, either in terms of loss of life or economic and social cost. Such events include Black Friday (1939), the Hobart fires (1967), Ash Wednesday (1983), the Canberra bushfires (2003), and most recently Black Saturday in February 2009. In most of these events the weather of the day was statistically extreme in terms of heat, (low) humidity, and wind speed, and in terms of antecedent drought. There are a number of reasons for conducting post-event analyses of the meteorology of these events. One is to identify any meteorological circulation systems or dynamic processes occurring on those days that might not be widely or hitherto recognised, to document these, and to develop new forecast or guidance products. The understanding and prediction of such features can be used in the short term to assist in effective management of fires and the safety of firefighters and in the medium range to assist preparedness for the onset of extreme conditions. The results of such studies can also be applied to simulations of future climates to assess the likely changes in frequency of the most extreme fire weather events, and their documentary records provide a resource that can be used for advanced training purposes. In addition, particularly for events further in the past, revisiting these events using reanalysis data sets and contemporary NWP models can also provide insights unavailable at the time of the events.
Over the past few years the Bushfire CRC's Fire Weather and Fire Danger project in CAWCR has studied the mesoscale meteorology of a number of major fire events, including the days of Ash Wednesday 1983, the Dandenong Ranges fire in January 1997, the Canberra fires and the Alpine breakout fires in January 2003, the Lower Eyre Peninsula fires in January 2005 and the Boorabbin fire in December 2007 to January 2008. Various aspects of these studies are described below, including the structures of dry cold frontal wind changes, the particular character of the cold fronts associated with the most damaging fires in southeastern Australia, and some aspects of how the vertical temperature and humidity structure of the atmosphere may affect the fire weather at the surface.
These studies reveal much about these major events, but also suggest future research directions, and some of these will be discussed.


Modelling of Hydrological Persistence in the Murray-Darling Basin for the Management of Weirs 12:10 Mon 4 Apr, 2011 :: 5.57 Ingkarni Wardli :: Aiden Fisher :: University of Adelaide
The lakes and weirs along the lower Murray River in Australia are aggregated and
considered as a sequence of five reservoirs. A seasonal Markov chain model for
the system will be implemented, and a stochastic dynamic program will be used to
find optimal release strategies, in terms of expected monetary value (EMV), for
the competing demands on the water resource given the stochastic nature of
inflows. Matrix analytic methods will be used to analyse the system further, and
in particular enable the full distribution of first passage times between any
groups of states to be calculated. The full distribution of first passage times
can be used to provide a measure of the risk associated with optimum EMV
strategies, such as conditional value at risk (CVaR). The sensitivity of the
model, and risk, to changing rainfall scenarios will be investigated. The effect
of decreasing the level of discretisation of the reservoirs will be explored.
Also, the use of matrix analytic methods facilitates the use of hidden states to
allow for hydrological persistence in the inflows. Evidence for hydrological
persistence of inflows to the lower Murray system, and the effect of making
allowance for this, will be discussed. 
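As a sketch of the first-passage computation mentioned above, the following solves the standard linear system for expected hitting times in a discrete-time Markov chain. The three-state chain is invented purely for illustration and is not the reservoir model; matrix-analytic methods deliver the full first-passage-time distribution, whereas this sketch computes only its mean.

```python
# Expected hitting times h_i of a target state in a 3-state Markov chain:
# h_target = 0 and h_i = 1 + sum_j P[i][j] * h_j for the other states.
P = [
    [0.5, 0.4, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
]
target = 2

# With two unknowns (h0, h1) the hitting-time equations form a 2x2 system:
#   (1 - P[0][0]) h0 -      P[0][1] h1 = 1
#       -P[1][0] h0 + (1 - P[1][1]) h1 = 1
a, b = 1 - P[0][0], -P[0][1]
c, d = -P[1][0], 1 - P[1][1]
det = a * d - b * c
h0 = (d - b) / det          # Cramer's rule, right-hand side (1, 1)
h1 = (a - c) / det

# Check the solution satisfies the original hitting-time equations.
assert abs(h0 - (1 + P[0][0] * h0 + P[0][1] * h1)) < 1e-9
assert abs(h1 - (1 + P[1][0] * h0 + P[1][1] * h1)) < 1e-9
```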

Optimal experimental design for stochastic population models 15:00 Wed 1 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Dan Pagendam :: CSIRO, Brisbane
Markov population processes are popular models for studying a wide range of
phenomena including the spread of disease, the evolution of chemical reactions
and the movements of organisms in population networks (metapopulations). Our
ability to use these models effectively can be limited by our knowledge about
parameters, such as disease transmission and recovery rates in an epidemic.
Recently, there has been interest in devising optimal experimental designs for
stochastic models, so that practitioners can collect data in a manner that
maximises the precision of maximum likelihood estimates of the parameters for
these models. I will discuss some recent work on optimal design for a variety
of population models, beginning with some simple one-parameter models where the
optimal design can be obtained analytically and moving on to more complicated
multi-parameter models in epidemiology that involve latent states and
non-exponentially distributed infectious periods. For these more complex
models, the optimal design must be arrived at using computational methods and we
rely on a Gaussian diffusion approximation to obtain analytical expressions for
Fisher's information matrix, which is at the heart of most optimality criteria
in experimental design. I will outline a simple cross-entropy algorithm that
can be used for obtaining optimal designs for these models. We will also
explore the improvements in experimental efficiency when using the optimal
design over some simpler designs, such as the design where observations are
spaced equidistantly in time. 
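A hedged one-parameter illustration of the design idea: suppose a single Bernoulli observation records whether an exponential(lam) lifetime exceeds the observation time t, and we choose t to maximise the Fisher information about lam. This model, the parameter value, and the grid search are assumptions invented for the sketch, not the epidemic models from the talk.

```python
import math

lam = 1.0   # true rate, fixed for the illustration

def fisher_info(t):
    """Fisher information about lam from one Bernoulli observation of
    survival past time t, where p(t) = exp(-lam * t)."""
    p = math.exp(-lam * t)
    # Bernoulli information: (dp/dlam)^2 / (p * (1 - p)), with dp/dlam = -t*p.
    return (t * p) ** 2 / (p * (1 - p))

# Grid search for the single observation time maximising the information.
grid = [i / 1000 for i in range(1, 5000)]
t_opt = max(grid, key=fisher_info)
assert 1.4 < t_opt < 1.8   # the optimum sits near t = 1.59 / lam
```

Observing too early (survival almost certain) or too late (death almost certain) yields little information; the optimum balances the two, which is the same trade-off the computational designs resolve for richer models.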

Inference and optimal design for percolation and general random graph models (Part I) 09:30 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge
The problem of optimal arrangement of nodes of a random weighted graph is discussed in this workshop. The nodes of the graphs under study are fixed, but their edges are random and established according to a so-called edge-probability function. This function is assumed to depend on the weights attributed to the pairs of graph nodes (or distances between them) and a statistical parameter. It is the purpose of experimentation to make inference on the statistical parameter and thus to extract as much information about it as possible. We also distinguish between two different experimentation scenarios: progressive and instructive designs.
We adopt a utility-based Bayesian framework to tackle the optimal design problem for random graphs of this kind. Simulation-based optimisation methods, mainly Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We study the optimal design problem for inference based on partial observations of random graphs by employing a data augmentation technique. We prove that infinitely growing or diminishing node configurations asymptotically represent the worst node arrangements. We also obtain the exact solution to the optimal design problem for proximity (geometric) graphs and numerical solutions for graphs with threshold edge-probability functions.
We consider inference and optimal design problems for finite clusters from bond percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both numerical and analytical results for these graphs. We introduce inner-outer plots by deleting some of the lattice nodes and show that the "mostly populated" designs are not necessarily optimal in the case of incomplete observations under both progressive and instructive design scenarios. Some of the obtained results may generalise to other lattices. 

Inference and optimal design for percolation and general random graph models (Part II) 10:50 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge

Probability density estimation by diffusion 15:10 Fri 10 Jun, 2011 :: 7.15 Ingkarni Wardli :: Prof Dirk Kroese :: University of Queensland
One of the beautiful aspects of Mathematics is that seemingly
disparate areas can often have deep connections. This talk is about
the fundamental connection between probability density estimation,
diffusion processes, and partial differential equations. Specifically,
we show how to obtain efficient probability density estimators by
solving partial differential equations related to diffusion processes.
This new perspective leads, in combination with Fast Fourier
techniques, to very fast and accurate algorithms for density
estimation. Moreover, the diffusion formulation unifies most of the
existing adaptive smoothing algorithms and provides a natural solution
to the boundary bias of classical kernel density estimators. This talk
covers topics in Statistics, Probability, Applied Mathematics, and
Numerical Mathematics, with a surprise appearance of the theta
function. This is joint work with Zdravko Botev and Joe Grotowski. 
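The stated connection can be made concrete in a few lines: a Gaussian kernel density estimate is exactly the solution of the heat (diffusion) equation, started from the empirical measure of the data, evaluated at "time" t = h²/2 for bandwidth h. The synthetic data, the hand-picked bandwidth, and the simple quadrature below are assumptions for the sketch; the talk's estimator additionally adapts the smoothing and corrects boundary bias.

```python
import math, random

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(500)]   # synthetic sample
h = 0.4   # bandwidth, i.e. sqrt(2 * diffusion time); fixed by hand here

def kde(x):
    # Heat-kernel (Gaussian) smoothing of the empirical measure of the data.
    return sum(math.exp(-(x - xi) ** 2 / (2 * h * h)) for xi in data) / (
        len(data) * h * math.sqrt(2 * math.pi))

# The estimate is a probability density: it integrates (numerically) to ~1.
xs = [-8 + 16 * i / 800 for i in range(801)]
mass = sum(kde(x) for x in xs) * (16 / 800)
assert abs(mass - 1.0) < 0.01
```

The Fast Fourier speed-up mentioned in the abstract comes from evaluating exactly this Gaussian convolution on a grid rather than by the direct double loop used here.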

Stochastic models of reaction diffusion 15:10 Fri 17 Jun, 2011 :: 7.15 Ingkarni Wardli :: Prof Jon Chapman :: Oxford University
We consider two different position jump processes: (i) a random walk on a lattice; (ii) the Euler scheme for the Smoluchowski differential equation. Both of these reduce to the diffusion equation as the time step and the size of the jump tend to zero.
We consider the problem of adding chemical reactions to these
processes, both at a surface and in the bulk. We show how the
"microscopic" parameters should be chosen to achieve the correct
"macroscopic" reaction rate. This choice is found to depend on
which stochastic model for diffusion is used. 
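A quick numerical check of reduction (i): for a symmetric lattice walk taking steps of size dx every dt, the position variance at time t is (dx²/dt)·t, matching a diffusion equation with coefficient D = dx²/(2·dt). The step sizes and sample counts below are chosen arbitrarily for the sketch.

```python
import random

random.seed(2)
dx, dt, t_end = 0.1, 0.005, 1.0
steps = int(t_end / dt)                    # 200 lattice steps per walker

def walk():
    """Final position of one symmetric lattice random walker."""
    x = 0.0
    for _ in range(steps):
        x += random.choice((-dx, dx))
    return x

samples = [walk() for _ in range(4000)]
var = sum(x * x for x in samples) / len(samples)

# Theory: Var = steps * dx^2 = (dx^2 / dt) * t_end = 2 * D * t_end = 2.0 here.
assert abs(var - steps * dx * dx) < 0.3
```

Shrinking dx and dt with dx²/dt held fixed leaves the variance (and hence D) unchanged, which is the diffusion limit referred to in the abstract.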

Modelling computer network topologies through optimisation 12:10 Mon 1 Aug, 2011 :: 5.57 Ingkarni Wardli :: Mr Rhys Bowden :: University of Adelaide
The core of the Internet is made up of many different computers (called routers) in many different interconnected networks, owned and operated by many different organisations. A popular and important field of study in the past has been "network topology": for instance, understanding which routers are connected to which other routers, or which networks are connected to which other networks; that is, studying and modelling the connection structure of the Internet. Previous study in this area has been plagued by unreliable or flawed experimental data and debate over appropriate models to use. The Internet Topology Zoo is a new source of network data created from the information that network operators make public. In order to better understand this body of network information we would like the ability to randomly generate network topologies resembling those in the zoo. Leveraging previous wisdom on networks produced as a result of optimisation processes, we propose a simple objective function based on possible economic constraints. By changing the relative costs in the objective function we can change the form of the resulting networks, and we compare these optimised networks to a variety of networks found in the Internet Topology Zoo. 

Laplace's equation on multiply-connected domains 12:10 Mon 29 Aug, 2011 :: 5.57 Ingkarni Wardli :: Mr Hayden Tronnolone :: University of Adelaide
Various physical processes take place on multiply-connected domains
(domains with some number of 'holes'), such as the stirring of a fluid
with paddles or the extrusion of material from a die. These systems may
be described by partial differential equations (PDEs). However, standard
numerical methods for solving PDEs are not wellsuited to such examples:
finite difference methods are difficult to implement on
multiplyconnected domains, especially when the boundaries are irregular
or moving, while finite element methods are computationally expensive.
In this talk I will describe a fast and accurate numerical method for
solving certain PDEs on twodimensional multiplyconnected domains,
considering Laplace's equation as an example. This method takes
advantage of complex variable techniques which allow the solution to be
found with spectral accuracy provided the boundary data is smooth. Other
advantages over traditional numerical methods will also be discussed. 

Alignment of time course gene expression data sets using Hidden Markov Models 12:10 Mon 5 Sep, 2011 :: 5.57 Ingkarni Wardli :: Mr Sean Robinson :: University of Adelaide
Time course microarray experiments allow for insight into biological processes by measuring gene expression over a time period of interest. This project is concerned with time course data from a microarray experiment conducted on a particular variety of grapevine over the development of the grape berries at a number of different vineyards in South Australia. The aim of the project is to construct a methodology for combining the data from the different vineyards in order to obtain more precise estimates of the underlying behaviour of the genes over the development process. A major issue in doing so is that the rate of development of the grape berries is different at different vineyards.
Hidden Markov models (HMMs) are a well-established methodology for modelling time series data in a number of domains and have been previously used for gene expression analysis. Modelling the grapevine data presents a unique modelling issue, namely the alignment of the expression profiles needed to combine the data from different vineyards. In this seminar, I will describe our problem, review HMMs, present an extension to HMMs and show some preliminary results modelling the grapevine data. 
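For readers unfamiliar with HMMs, the likelihood computation they rest on can be sketched as follows. This is a generic two-state example with made-up parameters, not the grapevine model described in the talk:

```python
import numpy as np

# Forward algorithm for a two-state hidden Markov model (HMM):
# computes the likelihood of an observation sequence by summing over
# all hidden state paths.  All parameter values are illustrative and
# are not taken from the grapevine study described in the abstract.
A = np.array([[0.9, 0.1],   # hidden-state transition probabilities
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],   # emission probabilities (2 observable symbols)
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])   # initial state distribution

def forward_likelihood(obs):
    """Dynamic programming over hidden paths: O(T * n_states^2)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

likelihood = forward_likelihood([0, 0, 1, 1])
```

The dynamic programming recursion gives the same answer as brute-force enumeration of all hidden paths, but in time linear rather than exponential in the sequence length.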

Estimating disease prevalence in hidden populations 14:05 Wed 28 Sep, 2011 :: B.18 Ingkarni Wardli :: Dr Amber Tomas :: The University of Oxford
Estimating disease prevalence in "hidden" populations such as injecting
drug users or men who have sex with men is an important public health
issue. However, traditional design-based estimation methods are
inappropriate because they assume that a list of all members of the
population is available from which to select a sample. Respondent Driven
Sampling (RDS) is a method developed over the last 15 years for sampling
from hidden populations. Similarly to snowball sampling, it leverages the
fact that members of hidden populations are often socially connected to
one another. Although RDS is now used around the world, there are several
common population characteristics which are known to cause estimates
calculated from such samples to be significantly biased. In this talk I'll
discuss the motivation for RDS, as well as some of the recent developments
in methods of estimation. 

The change of probability measure for jump processes 12:10 Mon 28 May, 2012 :: 5.57 Ingkarni Wardli :: Mr Ahmed Hamada :: University of Adelaide
In financial derivatives pricing theory, it is very common to change the probability measure from the historical ("real-world") measure to a risk-neutral measure as a consequence of the no-arbitrage condition.
Girsanov's theorem is the best-known example of this technique and is used when price randomness is modelled by Brownian motion. Other genuine candidates for modelling market randomness that have proved effective in the recent literature are jump processes, so how can a change of measure be performed for such processes?
This talk will address this question by introducing the no-arbitrage condition, discussing Girsanov's theorem for diffusion and jump processes and presenting a concrete example. 
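As a concrete illustration of the idea (a standard textbook formula, stated here for context rather than taken from the talk): if $N_t$ is a Poisson process with intensity $\lambda$ under $P$, the measure $Q$ under which $N$ has intensity $\tilde{\lambda}$ is defined by the density process

```latex
\left.\frac{dQ}{dP}\right|_{\mathcal{F}_t}
  = \exp\!\bigl((\lambda - \tilde{\lambda})\,t\bigr)
    \left(\frac{\tilde{\lambda}}{\lambda}\right)^{N_t}.
```

This is the jump-process analogue of the exponential martingale that appears in Girsanov's theorem for Brownian motion.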

Model turbulent floods based upon the Smagorinski large eddy closure 12:10 Mon 4 Jun, 2012 :: 5.57 Ingkarni Wardli :: Mr Meng Cao :: University of Adelaide
Rivers, floods and tsunamis are often very turbulent. Conventional models of such environmental fluids are typically based on depth-averaged inviscid irrotational flow equations. We explore changing such a base to the turbulent Smagorinski large eddy closure. The aim is to more appropriately model the fluid dynamics of such complex environmental fluids by using such a turbulent closure. Large changes in fluid depth are allowed. Computer algebra constructs the slow manifold of the flow in terms of the fluid depth h and the mean turbulent lateral velocities u and v. The major challenge is to deal with the nonlinear stress tensor in the Smagorinski closure. The model integrates the effects of inertia, self-advection, bed drag, gravitational forcing and turbulent dissipation with minimal assumptions. Although the resultant model is close to established models, the real outcome is creating a sound basis for the modelling so others, in their modelling of more complex situations, can systematically include more complex physical processes. 

Adventures with group theory: counting and constructing polynomial invariants for applications in quantum entanglement and molecular phylogenetics 15:10 Fri 8 Jun, 2012 :: B.21 Ingkarni Wardli :: Dr Peter Jarvis :: The University of Tasmania
In many modelling problems in mathematics and physics, a standard
challenge is dealing with several repeated instances of a system under
study. If linear transformations are involved, then the machinery of
tensor products steps in, and it is the job of group theory to control how
the relevant symmetries lift from a single system, to having many copies.
At the level of group characters, the construction which does this is
called PLETHYSM.
In this talk all this will be contextualised via two case studies:
entanglement invariants for multipartite quantum systems, and Markov
invariants for tree reconstruction in molecular phylogenetics. By the end
of the talk, listeners will have understood why Alice, Bob and Charlie
love Cayley's hyperdeterminant, and they will know why the three squangles
(polynomial beasts of degree 5 in 256 variables, with a modest 50,000
terms or so) can tell us a lot about quartet trees! 

Drawing of Viscous Threads with Temperature-dependent Viscosity 14:10 Fri 10 Aug, 2012 :: Engineering North N218 :: Dr Jonathan Wylie :: City University of Hong Kong
The drawing of viscous threads is important in a wide range of industrial
applications and is a primary manufacturing process in the optical fiber
and textile industries. Most of the materials used in these processes have
viscosities that vary extremely strongly with temperature.
We investigate the role played by viscous heating in the
drawing of viscous threads. Usually, the effects of viscous heating and
inertia are neglected because the parameters that characterize them are
typically very small. However, by performing a detailed theoretical
analysis we surprisingly show that even very small amounts of viscous
heating can lead to a runaway phenomenon. On the other hand, inertia
prevents runaway, and the interplay between viscous heating and inertia
results in very complicated dynamics for the system.
Even more surprisingly, in the absence of viscous heating, we find that a
new type of instability can occur when a thread is heated by a radiative
heat source. By analyzing an asymptotic limit of the Navier-Stokes
equation we provide a theory that describes the nature of this instability
and explains the seemingly counterintuitive behavior.


Towards understanding fundamental interactions for nanotechnology 15:10 Fri 5 Oct, 2012 :: B.20 Ingkarni Wardli :: Dr Doreen Mollenhauer :: MacDiarmid Institute for Advanced Materials and Nanotechnology, Wellington
Multiple simultaneous interactions show unique collective properties that are qualitatively different from properties displayed by their monovalent constituents. Multivalent interactions play an important role in the self-organization of matter, recognition processes and signal transduction. A broad understanding of these interactions is therefore crucial in order to answer central questions and make new developments in the fields of biotechnology and material science. In the framework of a joint experimental and theoretical project we study the electronic effects in monovalent and multivalent interactions by doing quantum chemical calculations. The particular interest of our investigations is in organic molecules interacting with gold nanoparticles or graphene. The main purpose is to analyze the nature of multivalent bonding in comparison to monovalent interaction. 

Dynamics of microbial populations from a copper sulphide leaching heap 12:30 Mon 12 Nov, 2012 :: B.21 Ingkarni Wardli :: Ms Susana Soto Rojo :: University of Adelaide
We are interested in the dynamics of the microbial population from a copper sulphide bioleaching heap. The composition of the microbial consortium is closely related to the kinetics of the oxidation processes that lead to copper recovery. Using a nonlinear model, which considers the effect of substrate depletion and incorporates spatial dependence, we analyse the correlation between adjacent strips, patterns of microbial succession, the relevance of pertinent physico-chemical parameters and the implications of the absence of barriers between the three lifts of the heap. We also explore how the dynamics of the microbial community relate to the mineral composition of the individual strips of the bioleaching pile. 

Asymptotic independence of (simple) two-dimensional Markov processes 15:10 Fri 1 Mar, 2013 :: B.18 Ingkarni Wardli :: Prof Guy Latouche :: Universite Libre de Bruxelles
The one-dimensional birth-and-death model is one of the basic processes in applied probability, but difficulties appear as one moves to higher dimensions. In the positive recurrent case, the situation is singularly simplified if the stationary distribution has product form. We investigate the conditions under which this property holds, and we show how to use this knowledge to find product-form approximations for otherwise unmanageable random walks. This is joint work with Masakiyo Miyazawa and Peter Taylor. 

A multiscale approach to reaction-diffusion processes in domains with microstructure 15:10 Fri 15 Mar, 2013 :: B.18 Ingkarni Wardli :: Prof Malte Peter :: University of Augsburg
Reaction-diffusion processes occur in many materials with microstructure, such as biological cells, steel or concrete. The main difficulty in accurately modelling and simulating such processes is to account for the fine microstructure of the material. One method of upscaling multiscale problems, which has proven reliable for obtaining feasible macroscopic models, is the method of periodic homogenisation.
The talk will give an introduction to multiscale modelling of chemical mechanisms in domains with microstructure as well as to the method of periodic homogenisation. Moreover, a few aspects of solving the resulting systems of equations numerically will also be discussed. 

How fast? Bounding the mixing time of combinatorial Markov chains 15:10 Fri 22 Mar, 2013 :: B.18 Ingkarni Wardli :: Dr Catherine Greenhill :: University of New South Wales
A Markov chain is a stochastic process which is "memoryless",
in that the next state of the chain depends only on the current state,
and not on how it got there. It is a classical result that an ergodic
Markov chain has a unique stationary distribution.
However, classical theory does not provide any information on the rate of
convergence to stationarity. Around 30 years ago, the mixing time of
a Markov chain was introduced to measure the number of steps required
before the distribution of the chain is within some small distance of
the stationary distribution. One reason why this is important is that
researchers in areas such as physics and biology use Markov chains to
sample from large sets of interest. Rigorous bounds on the mixing time
of their chain allow these researchers to have confidence in their results.
Bounding the mixing time of combinatorial Markov chains can be a challenge, and there are only a few approaches available. I will discuss the main methods and give examples for each (with pretty pictures). 
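To make the notion of mixing time concrete, here is a minimal sketch that iterates the distribution of a lazy random walk on a 4-cycle until its total variation distance to the stationary distribution drops below a threshold. The chain is an assumed toy example, not one from the talk:

```python
import numpy as np

# Estimating the mixing time of a small ergodic Markov chain by
# iterating the distribution and tracking total variation (TV) distance
# to the stationary distribution.  The chain below (a lazy random walk
# on a 4-cycle) is illustrative only.
P = np.array([[0.5 , 0.25, 0.0 , 0.25],
              [0.25, 0.5 , 0.25, 0.0 ],
              [0.0 , 0.25, 0.5 , 0.25],
              [0.25, 0.0 , 0.25, 0.5 ]])
pi = np.full(4, 0.25)   # uniform, since P is doubly stochastic

def mixing_time(P, pi, start=0, eps=0.1):
    """Smallest t such that TV(mu_t, pi) <= eps, starting from a point mass."""
    mu = np.zeros(len(pi))
    mu[start] = 1.0
    t = 0
    while 0.5 * np.abs(mu - pi).sum() > eps:
        mu = mu @ P     # one step of the chain
        t += 1
    return t
```

For chains small enough to iterate exactly, this direct computation replaces the rigorous coupling or spectral bounds the talk discusses; for combinatorially large state spaces, only such bounds are feasible.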

Models of cell-extracellular matrix interactions in tissue engineering 15:10 Fri 3 May, 2013 :: B.18 Ingkarni Wardli :: Dr Ed Green :: University of Adelaide
Tissue engineers hope in future to be able to grow functional tissues in vitro to replace those that are damaged by injury, disease, or simple wear and tear. They use cell culture methods, such as seeding cells within collagen gels, that are designed to mimic the cells' environment in vivo. Amongst other factors, it is clear that mechanical interactions between cells and the extracellular matrix (ECM) in which they reside play an important role in tissue development. However, the mechanics of the ECM is complex, and at present, its role is only partly understood. In this talk, I will present mathematical models of some simple cell-ECM interaction problems, and show how they can be used to gain more insight into the processes that regulate tissue development. 

Filtering Theory in Modelling the Electricity Market 12:10 Mon 6 May, 2013 :: B.19 Ingkarni Wardli :: Ahmed Hamada :: University of Adelaide
In mathematical finance, as in many other fields where applied mathematics is a powerful tool, we assume that a model is good enough when it captures the different sources of randomness affecting the quantity of interest, which in this case is the electricity price. The power market is very different from other markets in terms of the sources of randomness that can be observed in the features and evolution of prices. We start by suggesting a new model that simulates electricity prices; this model is constructed by adding a periodicity term, a jump term and a positive mean-reverting term. The latter term is driven by a non-observable Markov process. So in order to price financial products, we have to use filtering theory to deal with the non-observable process; these techniques are attracting much interest from practitioners and researchers in the field of financial mathematics. 

Multiscale modelling couples patches of wave-like simulations 12:10 Mon 27 May, 2013 :: B.19 Ingkarni Wardli :: Meng Cao :: University of Adelaide
A multiscale model is proposed to significantly reduce the expensive numerical simulations of complicated waves over large spatial domains. The multiscale model is built from given microscale simulations of complicated physical processes such as sea ice or turbulent shallow water. Our long-term aim is to enable macroscale simulations obtained by coupling small patches of simulations together over large physical distances. This initial work explores the coupling of patch simulations of wave-like PDEs. With the line of development being towards water waves, we discuss the dynamics of two complementary fields called the 'depth' h and 'velocity' u. A staggered grid is used for the microscale simulation of the depth h and velocity u. We introduce a macroscale staggered grid to couple the microscale patches. Linear or quadratic interpolation provides boundary conditions on the field in each patch. Linear analysis of the whole coupled multiscale system establishes that the resultant macroscale dynamics is appropriate. Numerical simulations support the linear analysis. This multiscale method should empower the feasible computation of large-scale simulations of wave-like dynamics with complicated underlying physics. 

Markov decision processes and interval Markov chains: what is the connection? 12:10 Mon 3 Jun, 2013 :: B.19 Ingkarni Wardli :: Mingmei Teo :: University of Adelaide
Markov decision processes are a way to model processes which involve some sort of decision making, and interval Markov chains are a way to incorporate uncertainty in the transition probability matrix. How are these two concepts related? In this talk, I will give an overview of these concepts and discuss how they relate to each other. 

The Hamiltonian Cycle Problem and Markov Decision Processes 15:10 Fri 2 Aug, 2013 :: B.18 Ingkarni Wardli :: Prof Jerzy Filar :: Flinders University
We consider the famous Hamiltonian cycle problem (HCP) embedded in a Markov decision process (MDP). More specifically, we consider a moving object on a graph G where, at each vertex, a controller may select an arc emanating from that vertex according to a probabilistic decision rule. A stationary policy is simply a control where these decision rules are time invariant. Such a policy induces a Markov chain on the vertices of the graph. Therefore, HCP is equivalent to a search for a stationary policy that induces a 0-1 probability transition matrix whose nonzero entries trace out a Hamiltonian cycle in the graph. A consequence of this embedding is that we may consider the problem over a number of alternative convex, rather than discrete, domains. These include: (a) the space of stationary policies, (b) the more restricted but very natural space of doubly stochastic matrices induced by the graph, and (c) the associated spaces of so-called "occupational measures". This approach has led to both theoretical and algorithmic advances on the underlying HCP. In this presentation, we outline a selection of results generated by this line of research. 

Modelling and optimisation of group dose-response challenge experiments 12:10 Mon 28 Oct, 2013 :: B.19 Ingkarni Wardli :: David Price :: University of Adelaide
An important component of scientific research is the 'experiment'. Effective design of these experiments is important and, accordingly, has received significant attention under the heading 'optimal experimental design'. However, until recently, little work has been done on optimal experimental design for experiments where the underlying process can be modelled by a Markov chain. In this talk, I will discuss some of the work that has been done in the field of optimal experimental design for Markov chains, and some of the work that I have done in applying this theory to dose-response challenge experiments for the bacterium Campylobacter jejuni in chickens. 

A gentle introduction to bubble evolution in Hele-Shaw flows 15:10 Fri 22 Nov, 2013 :: 5.58 (Ingkarni Wardli) :: Dr Scott McCue :: QUT
A Hele-Shaw cell is easy to make and serves as a fun toy for an applied mathematician to play with. If we inject air into a Hele-Shaw cell that is otherwise filled with viscous fluid, we can observe a bubble of air growing in size. The process is highly unstable, and the bubble boundary expands in an uneven fashion, leading to striking fingering patterns (look up Hele-Shaw cell or Saffman-Taylor instability on YouTube). From a mathematical perspective, modelling these Hele-Shaw flows is interesting because the governing equations are sufficiently "simple" that a considerable amount of analytical progress is possible. Indeed, there is no other context in which (genuinely) two-dimensional moving boundary problems are so tractable. More generally, Hele-Shaw flows are important as they serve as prototypes for more complicated (and important) physical processes such as crystal growth and diffusion-limited aggregation. I will give an introduction to some of the main ideas and summarise some of my present research in this area.


Buoyancy driven exchange flows in the nearshore regions of lakes and reservoirs 15:10 Mon 2 Dec, 2013 :: 5.58 (Ingkarni Wardli) :: Professor John Patterson :: University of Sydney
Natural convection is the flow driven by differences in density, and is ubiquitous in nature and industry. It is the source of most environmental flows, and is the basis for almost all industrial heat exchange processes. It operates on both massive and micro scales. It is usually considered as a flow driven by temperature gradients, but could equally be from a gradient in any density-determining property; salinity is one obvious example. It also depends on gravity, so magnetohydrodynamics becomes relevant as well. One particularly interesting and environmentally relevant flow is the exchange flow in the nearshore regions of lakes and reservoirs. This occurs because the decreasing depth approaching the shore results in laterally unequal heat loss and heat gain during the diurnal cooling and heating cycle. This presentation will discuss some of the results obtained by the Natural Convection Group at Sydney University in analytical, numerical and experimental investigations of this mechanism, and the implications for lake water quality. 

A few flavours of optimal control of Markov chains 11:00 Thu 12 Dec, 2013 :: B18 :: Dr Sam Cohen :: Oxford University
In this talk we will outline a general view of optimal control of a continuous-time Markov chain, and how this naturally leads to the theory of Backward Stochastic Differential Equations. We will see how this class of equations gives a natural setting to study these problems, and how we can calculate numerical solutions in many settings. These will include problems with payoffs with memory, with random terminal times, with ergodic and infinite-horizon value functions, and with finite and infinitely many states. Examples will be drawn from finance, networks and electronic engineering. 

Weak Stochastic Maximum Principle (SMP) and Applications 15:10 Thu 12 Dec, 2013 :: B.21 Ingkarni Wardli :: Dr Harry Zheng :: Imperial College, London
In this talk we discuss a weak necessary and sufficient SMP for Markov modulated optimal control problems. Instead of insisting on the maximum condition of the Hamiltonian, we show that 0 belongs to the sum of Clarke's generalized gradient of the Hamiltonian and Clarke's normal cone of the control constraint set at the optimal control. Under a joint concavity condition on the Hamiltonian the necessary condition becomes sufficient. We give examples to demonstrate the weak SMP and its applications in quadratic loss minimization. 

Ergodicity and loss of capacity: a stochastic horseshoe? 15:10 Fri 9 May, 2014 :: B.21 Ingkarni Wardli :: Professor Ami Radunskaya :: Pomona College, the United States of America
Random fluctuations of an environment are common in ecological and
economical settings. The resulting processes can be described by a
stochastic dynamical system, where a family of maps parametrized by an
independent, identically distributed random variable forms the basis for a
Markov chain on a continuous state space. Random dynamical systems are a
beautiful combination of deterministic and random processes, and they have
received considerable interest since von Neumann and Ulam's seminal work in
the 1940's. Key questions in the study of a stochastic dynamical system
are: does the system have a well-defined average, i.e. is it ergodic?
How does this long-term behavior compare to that of the state
variable in a constant environment with the averaged parameter?
In this talk we answer these questions for a family of maps on the unit
interval that model self-limiting growth. The techniques used can be
extended to study other families of concave maps, and so we conjecture the
existence of a "stochastic horseshoe". 

Stochastic models of evolution: Trees and beyond 15:10 Fri 16 May, 2014 :: B.18 Ingkarni Wardli :: Dr Barbara Holland :: The University of Tasmania
In the first part of the talk I will give a general introduction to phylogenetics, and discuss some of the mathematical and statistical issues that arise in trying to infer evolutionary trees. In particular, I will discuss how we model the evolution of DNA along a phylogenetic tree using a continuous-time Markov process.
In the second part of the talk I will discuss how to express the two-state continuous-time Markov model on phylogenetic trees in such a way that allows its extension to more general models. In this framework we can model convergence of species as well as divergence (speciation). I will discuss the identifiability (or otherwise) of the models that arise in some simple cases. Use of a statistical framework means that we can use established techniques such as the AIC or likelihood ratio tests to decide if datasets show evidence of convergent evolution. 
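As a minimal illustration of the kind of two-state continuous-time Markov model mentioned above (a generic textbook example, not the extended convergence model of the talk), the transition probabilities over a branch of length t have a simple closed form:

```python
import numpy as np

# Transition probabilities for a two-state continuous-time Markov chain,
# e.g. a purine/pyrimidine recoding of DNA substitution.  For the rate
# matrix Q = [[-a, a], [b, -b]], P(t) = exp(Qt) has the closed form
# below.  The rates used here are illustrative only.
def transition_matrix(a, b, t):
    s = a + b
    e = np.exp(-s * t)        # decay factor of the non-stationary part
    return np.array([[(b + a * e) / s, (a - a * e) / s],
                     [(b - b * e) / s, (a + b * e) / s]])

P1 = transition_matrix(0.3, 0.7, 1.0)   # one branch of length t = 1
```

Multiplying such matrices along the branches of a tree, weighted by the root distribution, gives the likelihood of observed states at the leaves.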

Group meeting 15:10 Fri 6 Jun, 2014 :: 5.58 Ingkarni Wardli :: Meng Cao and Trent Mattner :: University of Adelaide
Meng Cao:: Multiscale modelling couples patches of nonlinear wave-like simulations ::
Abstract:
The multiscale gap-tooth scheme is built from given microscale simulations of complicated physical processes to empower macroscale simulations. By coupling small patches of simulations over unsimulated physical gaps, large savings in computational time are possible. So far the gap-tooth scheme has been developed for dissipative systems, but wave systems are also of great interest. This work develops the gap-tooth scheme for nonlinear microscale simulations of wave-like systems. Classic macroscale interpolation provides a generic coupling between patches that achieves arbitrarily high-order consistency between the multiscale scheme and the underlying microscale dynamics. Eigenanalysis indicates that the resultant gap-tooth scheme empowers feasible computation of large-scale simulations of wave-like dynamics with complicated underlying physics. As a pilot study, we implement numerical simulations of dam-breaking waves by the gap-tooth scheme. Comparison between a gap-tooth simulation, a microscale simulation over the whole domain, and some published experimental data on dam breaking demonstrates that the gap-tooth scheme feasibly computes large-scale wave-like dynamics with computational savings.
Trent Mattner :: Coupled atmosphere-fire simulations of the Canberra 2003 bushfires using WRF-SFire :: Abstract:
The Canberra fires of January 18, 2003 are notorious for the extreme fire behaviour and fire-atmosphere-topography interactions that occurred, including lee-slope fire channelling, pyrocumulonimbus development and tornado formation. In this talk, I will discuss coupled fire-weather simulations of the Canberra fires using WRF-SFire. In these simulations, a fire-behaviour model is used to dynamically predict the evolution of the fire front according to local atmospheric and topographic conditions, as well as the associated heat and moisture fluxes to the atmosphere. It is found that the predicted fire front and heat flux are not too bad, bearing in mind the complexity of the problem and the severe modelling assumptions made. However, the predicted moisture flux is too low, which has some impact on atmospheric dynamics. 

A Random Walk Through Discrete State Markov Chain Theory 12:10 Mon 22 Sep, 2014 :: B.19 Ingkarni Wardli :: James Walker :: University of Adelaide
This talk will go through the basics of Markov chain theory, including how to construct a continuous-time Markov chain (CTMC), how to adapt a Markov chain to include non-memoryless distributions, how to simulate CTMCs, and some key results. 
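The simulation step mentioned in the abstract can be sketched as follows. This is a generic illustration with a made-up generator, not material from the talk:

```python
import random

# Simulating a continuous-time Markov chain (CTMC): hold in each state
# for an exponentially distributed time, then jump according to the
# embedded (jump) chain.  The three-state generator below is
# illustrative only.
Q = {                      # off-diagonal rates of the generator matrix
    0: {1: 2.0},           # from state 0, jump to 1 at rate 2
    1: {0: 1.0, 2: 3.0},   # from state 1, jump to 0 at rate 1 or to 2 at rate 3
    2: {1: 0.5},
}

def simulate_ctmc(start, t_end, seed=42):
    """Return the (time, state) jump history up to time t_end."""
    rng = random.Random(seed)
    t, state, path = 0.0, start, [(0.0, start)]
    while True:
        rates = Q[state]
        total = sum(rates.values())
        t += rng.expovariate(total)          # exponential holding time
        if t >= t_end:
            return path
        u, acc = rng.random() * total, 0.0   # pick jump target with
        for s, r in rates.items():           # probability rate / total
            acc += r
            if u <= acc:
                state = s
                break
        path.append((t, state))

path = simulate_ctmc(0, 10.0)
```

Each holding time uses the total exit rate of the current state, and the jump target is chosen proportionally to the individual rates; this is the standard construction of a CTMC from its generator.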

A Hybrid Markov Model for Disease Dynamics 12:35 Mon 29 Sep, 2014 :: B.19 Ingkarni Wardli :: Nicolas Rebuli :: University of Adelaide
Modelling the spread of infectious diseases is fundamental to protecting ourselves from potentially devastating epidemics. Among other factors, two key indicators for the severity of an epidemic are the size of the epidemic and the time until the last infectious individual is removed. To estimate the distribution of the size and duration of an epidemic (within a realistic population) an epidemiologist will typically use Monte Carlo simulations of an appropriate Markov process. However, the number of states in the simplest Markov epidemic model, the SIR model, is quadratic in the population size and so Monte Carlo simulations are computationally expensive. In this talk I will discuss two methods for approximating the SIR Markov process and I will demonstrate the approximation error by comparing probability distributions and estimates of the distributions of the final size and duration of an SIR epidemic. 
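As a rough sketch of the Monte Carlo approach described above (illustrative parameter values, not those used in the talk):

```python
import random

# Monte Carlo estimation of the final size of a stochastic SIR epidemic.
# The state is (S, I); an infection occurs at rate beta*S*I/N and a
# recovery at rate gamma*I.  Because only the sequence of events matters
# for the final size, we simulate the embedded jump chain and skip the
# exponential holding times.  Parameter values are illustrative.
def final_size(N=50, I0=1, beta=2.0, gamma=1.0, seed=0):
    rng = random.Random(seed)
    S, I = N - I0, I0
    while I > 0:
        infection_rate = beta * S * I / N
        recovery_rate = gamma * I
        if rng.random() < infection_rate / (infection_rate + recovery_rate):
            S, I = S - 1, I + 1   # infection event
        else:
            I -= 1                # recovery event
    return N - S                  # number ever infected

sizes = [final_size(seed=s) for s in range(200)]
```

Repeating this over many seeds gives an empirical distribution of the final size; the cost of such repeated simulation over large populations is exactly what motivates the approximation methods in the talk.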

Multiscale modelling of multicellular biological systems: mechanics, development and disease 03:10 Fri 6 Mar, 2015 :: Lower Napier LG24 :: Dr James Osborne :: University of Melbourne
When investigating the development and function of multicellular biological systems it is not enough to only consider the behaviour of individual cells in isolation. For example when studying tissue development, how individual cells interact, both mechanically and biochemically, influences the resulting tissue's form and function. In this talk we present a multiscale modelling framework for simulating the development and function of multicellular biological systems (in particular tissues). Utilising the natural structural unit of the cell, the framework consists
of three main scales: the tissue level (macroscale); the cell level (mesoscale); and the subcellular level (microscale), with multiple interactions occurring between all scales. The cell level is central to the framework and cells are modelled as discrete interacting entities using one of a number of possible modelling paradigms, including lattice based models (cellular automata and cellular Potts) and off-lattice based models (cell centre and vertex based representations). The subcellular level concerns numerous metabolic and biochemical processes represented by interaction networks rendered stochastically or into ODEs. The outputs from such systems influence the behaviour of the cell level affecting properties such as adhesion and also influencing cell mitosis and apoptosis. At the tissue level we consider factors or restraints that influence the cells, for example the distribution of a nutrient or messenger molecule, which is represented by field equations, on a growing domain, with individual cells functioning as
sinks and/or sources. The modular approach taken within the framework enables more realistic behaviour to be considered at each scale.
This framework is implemented within the Open Source Chaste library (Cancer, Heart and Soft Tissue Environment, http://www.cs.ox.ac.uk/chaste/)
and has been used to model biochemical and biomechanical interactions in various biological systems. In this talk we present the key ideas of the framework along with applications within the fields of development and disease. 

Identifying the Missing Aspects of the ANSI/ISA Best Practices for Security Policy 12:10 Mon 27 Apr, 2015 :: Napier LG29 :: Dinesha Ranathunga :: University of Adelaide
Firewall configuration is a critical activity but it is often conducted manually, which often results in inaccurate, unreliable configurations that leave networks vulnerable to cyber attack. Firewall misconfigurations can have severe consequences in the context of critical infrastructure plants. Internal networks within these plants interconnect valuable industrial control equipment which often controls safety-critical processes. Security breaches here can result in disruption of critical services, cause severe environmental damage and, at worst, loss of human lives.
Automation can make designing firewall configurations less tedious and their deployment more reliable and increasingly cost-effective. In this talk I will discuss our efforts to arrive at a high-level security policy description based on the ANSI/ISA standard, suitable for automation. In doing so, we identify the missing aspects of the existing best practices and propose solutions. We then apply the corrected best-practice specifications to real SCADA firewall configurations and evaluate their usefulness in describing SCADA policies accurately. 

A Collision Algorithm for Sea Ice 12:10 Mon 4 May, 2015 :: Napier LG29 :: Lucas Yiew :: University of Adelaide
Wave-induced collisions between sea-ice floes are highly complex and nonlinear, and involve a multitude of sub-processes. Several collision models do exist; however, to date, none of these models has been successfully integrated into sea-ice forecasting models.
A key component of a collision model is the development of an appropriate collision algorithm. In this seminar I will present a time-stepping, event-driven algorithm to detect, analyse and implement the pre- and post-collision processes. 

Medical Decision Making 12:10 Mon 11 May, 2015 :: Napier LG29 :: Eka Baker :: University of Adelaide
Practicing physicians make treatment decisions based on clinical trial data every day. These data come from trials conducted primarily on healthy volunteers, or on those with only the disease in question. In reality, patients often have existing conditions that can affect the benefits and risks associated with receiving these treatments.
In this talk, I will explain how we modified an existing Markov model to show the progression of treatment of a single condition over time. I will then explain how we adapted this to a different condition, and then created a combined model, which demonstrates how both diseases and treatments progress in the same patient over their lifetime. 
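The kind of cohort-level Markov model of disease progression described above can be sketched in a few lines. The states, transition probabilities and cycle length below are invented for illustration, not taken from the talk:

```python
import numpy as np

# Hypothetical three-state illness-death model: Well -> Ill -> Dead.
# Transition probabilities per cycle (say, one year) are made up numbers.
P = np.array([
    [0.90, 0.08, 0.02],   # from Well
    [0.00, 0.85, 0.15],   # from Ill
    [0.00, 0.00, 1.00],   # Dead is absorbing
])

def state_distribution(p0, P, n_cycles):
    """Propagate the cohort distribution n_cycles steps: p_{t+1} = p_t P."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_cycles):
        p = p @ P
    return p

# Everyone starts Well; p10 gives the Well/Ill/Dead split after 10 cycles.
p10 = state_distribution([1.0, 0.0, 0.0], P, 10)
```

Combining two such conditions, as the talk describes, amounts to enlarging the state space so each state records the stage of both diseases.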

Natural Optimisation (No Artificial Colours, Flavours or Preservatives) 12:10 Mon 21 Sep, 2015 :: Benham Labs G10 :: James Walker :: University of Adelaide
Sometimes nature seems to have the best solutions to complicated optimisation problems. For example, ant colonies have a clever way of optimising the amount of food brought to the colony using pheromones; the process of natural selection gives rise to species which are optimally suited to their environment; and, although this process is not technically natural, for centuries people have been using properties of crystal formation to make steel with optimal properties. In this talk I will discuss non-convex optimisation and some optimisation methods inspired by natural processes. 
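As one concrete example of a nature-inspired method, the steel/crystal-formation analogy mentioned above is formalised as simulated annealing. The sketch below minimises an invented one-dimensional non-convex function; the cooling schedule, step size and test function are arbitrary choices, not anything from the talk:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=5.0, cooling=0.995, iters=5000, seed=0):
    """Minimise f by the annealing analogy: always accept downhill moves,
    accept uphill moves with probability exp(-delta/T), and lower T slowly."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                       # geometric cooling schedule
    return best, fbest

# A non-convex test function with many local minima; global minimum at x = 0.
f = lambda x: x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))
x_star, f_star = simulated_annealing(f, x0=4.0)
```

At high temperature the walker hops freely between basins; as the temperature falls it settles into (with luck) the deepest one.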

Ocean dynamics of Gulf St Vincent: a numerical study 12:10 Mon 2 Nov, 2015 :: Benham Labs G10 :: Henry Ellis :: University of Adelaide
The aim of this research is to determine the physical dynamics of ocean circulation within Gulf St. Vincent, South Australia, and the exchange of momentum, nutrients, heat, salt and other water properties between the gulf and shelf via Investigator Strait and Backstairs Passage. The project aims to achieve this through the creation of high-resolution numerical models, combined with new and historical observations from a moored instrument package, satellite data, and shipboard surveys.
The quasi-realistic high-resolution models are forced using boundary conditions generated by existing larger-scale ROMS models, which in turn are forced at the boundary by a global model, creating a global-to-regional-to-local model network. Climatological forcing is done using European Centre for Medium-Range Weather Forecasts (ECMWF) data sets and is consistent over the regional and local models. A series of conceptual models are used to investigate the relative importance of separate physical processes in addition to the fully forced quasi-realistic models.
An outline of the research to be undertaken is given:
• Connectivity of Gulf St. Vincent with shelf waters, including seasonal variation due to wind and thermocline patterns;
• The role of winter-time cooling and the formation of eddies in flushing the gulf;
• The formation of a temperature front within the gulf during summer time; and
• The connectivity and importance of nutrient-rich, cool water upwelling from the Bonney Coast to the gulf via Backstairs Passage during summer time. 

A Semi-Markovian Modeling of Limit Order Markets 13:00 Fri 11 Dec, 2015 :: Ingkarni Wardli 5.57 :: Anatoliy Swishchuk :: University of Calgary
R. Cont and A. de Larrard (SIAM J. Financial Mathematics, 2013) introduced a tractable stochastic model for the dynamics of a limit order book, computing various quantities of interest such as the probability of a price increase or the diffusion limit of the price process. As suggested by empirical observations, we extend their framework to allow 1) arbitrary distributions for the inter-arrival times of book events (possibly non-exponential) and 2) both the nature of a new book event and its corresponding inter-arrival time to depend on the nature of the previous book event. We do so by resorting to Markov renewal processes to model the dynamics of the bid and ask queues. We retain analytical tractability via explicit expressions for the Laplace transforms of various quantities of interest. Our approach is justified and illustrated by calibrating the model to the five stocks Amazon, Apple, Google, Intel and Microsoft on June 21st 2012. As in Cont and de Larrard, the bid-ask spread remains constant, equal to one tick; only the bid and ask queues are modelled (they are independent of each other and are reinitialised after a price change), and all orders have the same size. (This talk is based on our joint paper with Nelson Vadori (Morgan Stanley).) 
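The Markov renewal idea (the type of the next book event and its inter-arrival time both depend on the type of the previous event) can be illustrated with a toy simulator. All event types, probabilities and gap distributions below are invented for illustration, not calibrated quantities from the talk:

```python
import random

# Toy Markov renewal dynamics: event types 'arr' (order arrival) and
# 'can' (cancellation). Both the next type and the mean inter-arrival
# gap depend on the previous type; this is the semi-Markov feature.
NEXT_TYPE = {'arr': [('arr', 0.6), ('can', 0.4)],
             'can': [('arr', 0.5), ('can', 0.5)]}
MEAN_GAP = {('arr', 'arr'): 0.2, ('arr', 'can'): 0.5,
            ('can', 'arr'): 0.3, ('can', 'can'): 0.8}

def simulate(horizon, seed=1):
    """Generate (time, type) events up to the time horizon."""
    rng = random.Random(seed)
    t, prev, events = 0.0, 'arr', []
    while True:
        kinds, probs = zip(*NEXT_TYPE[prev])
        kind = rng.choices(kinds, probs)[0]
        # Non-exponential gaps: uniform on (0, 2*mean), mean set by the pair.
        gap = rng.uniform(0.0, 2.0 * MEAN_GAP[(prev, kind)])
        if t + gap > horizon:
            return events
        t += gap
        events.append((t, kind))
        prev = kind

trace = simulate(horizon=10.0)
```

The talk's analytical results (Laplace transforms of quantities of interest) replace this kind of brute-force simulation with closed-form computation.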

Mathematical modelling of the immune response to influenza 15:00 Thu 12 May, 2016 :: Ingkarni Wardli B20 :: Ada Yan :: University of Melbourne
The immune response plays an important role in the resolution of primary influenza infection and the prevention of subsequent infection in an individual. However, the relative roles of each component of the immune response in clearing infection, and the effects of interaction between components, are not well quantified.
We have constructed a model of the immune response to influenza based on data from viral interference experiments, where ferrets were exposed to two influenza strains within a short time period. The changes in the viral kinetics of the second virus due to the first virus depend on the strains used as well as the interval between exposures, enabling inference of the timing of innate and adaptive immune response components and the role of cross-reactivity in resolving infection. Our model provides a mechanistic explanation for the observed variation in viruses' abilities to protect against subsequent infection at short inter-exposure intervals, either by delaying the second infection or by inducing stochastic extinction of the second virus. It also explains the decrease in recovery time for the second infection when the two strains elicit cross-reactive cellular adaptive immune responses. To account for inter-subject as well as inter-virus variation, the model is formulated using a hierarchical framework. We will fit the model to experimental data using Markov chain Monte Carlo methods; quantification of the model will enable a deeper understanding of the effects of potential new treatments.


Approaches to modelling cells and remodelling biological tissues 14:10 Wed 10 Aug, 2016 :: Ingkarni Wardli 5.57 :: Professor Helen Byrne :: University of Oxford
Biological tissues are complex structures, whose evolution is characterised by multiple biophysical processes that act across diverse space and time scales. For example, during normal wound healing, fibroblast cells located around the wound margin exert contractile forces to close the wound while those located in the surrounding tissue synthesise new tissue in response to local growth factors and mechanical stress created by wound contraction. In this talk I will illustrate how mathematical modelling can provide insight into such complex processes, taking my inspiration from recent studies of cell migration, vasculogenesis and wound healing. 

Mathematical modelling of social spreading processes 15:10 Fri 19 Aug, 2016 :: Napier G03 :: Prof Hans De Sterck :: Monash University
Social spreading processes are intriguing manifestations of how humans interact and shape each other's lives. There is great interest in improving our understanding of these processes, and the increasing availability of empirical information in the era of big data and online social networks, combined with mathematical and computational modelling techniques, offers compelling new ways to study these processes.
I will first discuss mathematical models for the spread of political revolutions on social networks. The influence of online social networks and social media on the dynamics of the Arab Spring revolutions of 2011 is of particular interest in our work. I will describe a hierarchy of models, starting from agent-based models realised on empirical social networks, and ending with population-level models that summarise the dynamical behaviour of the spreading process. We seek to understand quantitatively how political revolutions may be facilitated by the modern online social networks of social media.
The second part of the talk will describe a population-level model for the social dynamics that cause cigarette smoking to spread in a population. Our model predicts that more individualistic societies will show faster adoption and cessation of smoking. Evidence from a newly composed century-long composite data set on smoking prevalence in 25 countries supports the model, with potential implications for public health interventions around the world.
Throughout the talk, I will argue that important aspects of social spreading processes can be revealed and understood via quantitative mathematical and computational models matched to empirical data.
This talk describes joint work with John Lang and Danny Abrams. 

SIR epidemics with stages of infection 12:10 Wed 28 Sep, 2016 :: EM218 :: Matthieu Simon :: Université Libre de Bruxelles
This talk is concerned with a stochastic model for the spread of an epidemic in a closed, homogeneously mixing population. The population is subdivided into three classes of individuals: the susceptibles, the infectives and the removed cases. In short, an infective remains infectious during a random period of time. While infected, it can contact all the susceptibles present, independently of the other infectives. At the end of the infectious period, it becomes a removed case and has no further part in the infection process.
We represent an infectious period as a set of different stages that an infective can go through before being removed. The transitions between stages are governed by either a Markov process or a semi-Markov process. In each stage, an infective makes contaminations at the epochs of a Poisson process with a stage-specific rate.
Our purpose is to derive closed expressions for a transform of different statistics related to the end of the epidemic, such as the final number of susceptibles and the area under the trajectories of all the infectives. The analysis is performed by using simple matrix analytic methods and martingale arguments. Numerical illustrations will be provided at the end of the talk. 
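A minimal simulation in this spirit, with two Markov stages and stage-specific contact rates, is sketched below. All rates are invented, and the matrix-analytic and martingale machinery of the talk is replaced here by brute-force event simulation:

```python
import random

# Toy stochastic SIR in which the infectious period has two Markov stages
# with different contamination rates. All rates are made-up numbers.
CONTACT = [0.04, 0.01]   # per-susceptible contamination rate in stages 0 and 1
PROGRESS = [1.0, 0.5]    # rate of leaving stage 0 (to stage 1) and stage 1 (removal)

def final_size(s0=50, i0=1, seed=3):
    """Simulate one epidemic; return how many susceptibles ever got infected."""
    rng = random.Random(seed)
    s, stages = s0, [0] * i0              # current stage of each infective
    while stages:
        rates = [CONTACT[j] * s + PROGRESS[j] for j in stages]
        k = rng.choices(range(len(stages)), rates)[0]   # which infective acts next
        j = stages[k]
        if rng.random() < CONTACT[j] * s / rates[k]:
            s -= 1                        # contamination: new infective in stage 0
            stages.append(0)
        elif j == 0:
            stages[k] = 1                 # progression to the second stage
        else:
            stages.pop(k)                 # removal at the end of stage 1
    return s0 - s

total_infected = final_size()
```

The epidemic ends when no infectives remain; the returned final size is one of the end-of-epidemic statistics the talk treats analytically.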

Lagrangian transport in deterministic flows: from theory to experiment 16:10 Tue 16 May, 2017 :: Engineering North N132 :: Dr Michel Speetjens :: Eindhoven University of Technology
Transport of scalar quantities (e.g. chemical species, nutrients, heat) in deterministic flows is key to a wide range of phenomena and processes in industry and Nature. This encompasses length scales ranging from microns to hundreds of kilometres, and includes systems as diverse as viscous flows in the processing industry, microfluidic flows in labs-on-a-chip and porous media, large-scale geophysical and environmental flows, physiological and biological flows and even continuum descriptions of granular flows.
Essential to the net transport of a scalar quantity is its advection by the fluid motion. The Lagrangian perspective (arguably) is the most natural way to investigate advection and leans on the fact that fluid trajectories are organized into coherent structures that geometrically determine the advective transport properties. Lagrangian transport is typically investigated via theoretical and computational studies and often concerns idealized flow situations that are difficult (or even impossible) to create in laboratory experiments. However, bridging the gap from theoretical and computational results to realistic flows is essential for their physical meaningfulness and practical relevance. This presentation highlights a number of fundamental Lagrangian transport phenomena and properties in both two-dimensional and three-dimensional flows and demonstrates their physical validity by way of representative and experimentally realizable flows. 

Stokes' Phenomenon in Translating Bubbles 15:10 Fri 2 Jun, 2017 :: Ingkarni Wardli 5.57 :: Dr Chris Lustri :: Macquarie University
This study of translating air bubbles in a Hele-Shaw cell containing viscous fluid reveals the critical role played by surface tension in these systems. The standard zero-surface-tension model of Hele-Shaw flow predicts that a continuum of bubble solutions exists for arbitrary flow translation velocity. The inclusion of small surface tension, however, eliminates this continuum of solutions, instead producing a discrete, countably infinite family of solutions, each with a distinct translation speed. We are interested in determining this discrete family of solutions, and understanding why only these solutions are permitted.
Studying this problem in the asymptotic limit of small surface tension does not seem to give any particular reason why only these solutions should be selected. It is only by using exponential asymptotic methods to study the Stokes structure hidden in the problem that we are able to obtain a complete picture of the bubble behaviour, and hence understand the selection mechanism that only permits certain solutions to exist.
In the first half of my talk, I will explain the powerful ideas that underpin exponential asymptotic techniques, such as analytic continuation and optimal truncation. I will show how they are able to capture behaviour known as Stokes' Phenomenon, which is typically invisible to classical asymptotic series methods. In the second half of the talk, I will introduce the problem of a translating air bubble in a Hele-Shaw cell, and show that the behaviour can be fully understood by examining the Stokes structure concealed within the problem. Finally, I will briefly showcase other important physical applications of exponential asymptotic methods, including submarine waves and particle chains. 

Exact coherent structures in high speed flows 15:10 Fri 28 Jul, 2017 :: Ingkarni Wardli B17 :: Prof Philip Hall :: Monash University
In recent years, there has been much interest in the relevance of nonlinear solutions of the Navier-Stokes equations to fully turbulent flows. The solutions must be calculated numerically at moderate Reynolds numbers but in the limit of high Reynolds numbers asymptotic methods can be used to greatly simplify the computational task and to uncover the key physical processes sustaining the nonlinear states. In particular, in confined flows exact coherent structures defining the boundary between the laminar and turbulent attractors can be constructed. In addition, structures which capture the essential physical properties of fully turbulent flows can be found. The extension of the ideas to boundary layer flows and current work attempting to explain the law of the wall will be discussed.


On the fundamentals of Rayleigh-Taylor instability and interfacial mixing 15:10 Fri 15 Sep, 2017 :: Ingkarni Wardli B17 :: Prof Snezhana Abarzhi :: University of Western Australia
Rayleigh-Taylor instability (RTI) develops when fluids of different densities are accelerated against their density gradient. Extensive interfacial mixing of the fluids ensues with time. Rayleigh-Taylor (RT) mixing controls a broad variety of processes in fluids, plasmas and materials, in high and low energy density regimes, at astrophysical and atomistic scales. Examples include the formation of the hot spot in inertial confinement fusion, supernova explosions, stellar and planetary convection, flows in the atmosphere and ocean, reactive and supercritical fluids, material transformation under impact and light-material interaction. In some of these cases (e.g. inertial confinement fusion) RT mixing should be tightly mitigated; in some others (e.g. turbulent combustion) it should be strongly enhanced. Understanding the fundamentals of RTI is crucial for achieving better control of non-equilibrium processes in nature and technology.
Traditionally, it was presumed that RTI leads to uncontrolled growth of small-scale imperfections, single-scale nonlinear dynamics, and extensive mixing that is similar to canonical turbulence. The recent success of theory and experiments in fluids and plasmas suggests an alternative scenario of RTI evolution. It finds that the interface is necessary for RT mixing to accelerate, that the acceleration effects are strong enough to suppress the development of turbulence, and that the RT dynamics is multiscale and has a significant degree of order.
This talk presents a physics-based consideration of the fundamentals of RTI and RT mixing, and summarizes what is certain and what is not so certain in our knowledge of RTI. The focus question is: how can we influence the regularization process in RT mixing? We also discuss new opportunities for improvements of predictive modeling capabilities, physical description, and control of RT mixing in fluids, plasmas and materials. 

The Markovian binary tree applied to demography and conservation biology 15:10 Fri 27 Oct, 2017 :: Ingkarni Wardli B17 :: Dr Sophie Hautphenne :: University of Melbourne
Markovian binary trees form a general and tractable class of continuous-time branching processes, which makes them well-suited for real-world applications. Thanks to their appealing probabilistic and computational features, these processes have proven to be an excellent modelling tool for applications in population biology. Typical performance measures of these models include the extinction probability of a population, the distribution of the population size at a given time, the total progeny size until extinction, and the asymptotic population composition. Besides giving an overview of the main performance measures and the techniques involved to compute them, we discuss recently developed statistical methods to estimate the model parameters, depending on the accuracy of the available data. We illustrate our results in human demography and in conservation biology. 
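One of the performance measures mentioned above, the extinction probability, is instructive even in the simplest branching setting. The sketch below uses a plain Galton-Watson offspring law (a stand-in for intuition, not the Markovian-binary-tree machinery of the talk, and with invented probabilities) and finds the extinction probability as the minimal fixed point of the offspring generating function:

```python
# Extinction probability of a branching process as the minimal solution of
# q = f(q), where f is the offspring probability generating function, via
# the standard iteration q_{n+1} = f(q_n) starting from q_0 = 0.
def extinction_probability(p0, p1, p2, tol=1e-12):
    """Offspring law: 0, 1 or 2 children with probabilities p0, p1, p2."""
    f = lambda s: p0 + p1 * s + p2 * s * s
    q = 0.0
    while True:
        q_next = f(q)
        if abs(q_next - q) < tol:
            return q_next
        q = q_next

# Mean offspring 0.3 + 2*0.5 = 1.3 > 1, so extinction is not certain;
# solving 0.5 q^2 - 0.7 q + 0.2 = 0 gives the minimal root q = 0.4.
q = extinction_probability(0.2, 0.3, 0.5)
```

Starting the iteration from 0 guarantees convergence to the minimal fixed point, which is the extinction probability; the same fixed-point idea underlies the matrix iterations used for Markovian binary trees.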

Stochastic Modelling of Urban Structure 11:10 Mon 20 Nov, 2017 :: Engineering Nth N132 :: Mark Girolami :: Imperial College London, and The Alan Turing Institute
Urban systems are complex in nature and comprise a large number of individuals that act according to utility, a measure of net benefit pertaining to preferences. The actions of individuals give rise to an emergent behaviour, creating the so-called urban structure that we observe. In this talk, I develop a stochastic model of urban structure to formally account for uncertainty arising from the complex behaviour. We further use this stochastic model to infer the components of a utility function from observed urban structure. This is a more powerful modelling framework in comparison to the ubiquitous discrete choice models that are of limited use for complex systems, in which the overall preferences of individuals are difficult to ascertain. We model urban structure as a realization of a Boltzmann distribution that is the invariant distribution of a related stochastic differential equation (SDE) describing the dynamics of the urban system. Our specification of the Boltzmann distribution assigns higher probability to stable configurations, in the sense that consumer surplus (demand) is balanced with running costs (supply), as characterized by a potential function. We specify a Bayesian hierarchical model to infer the components of a utility function from observed structure. Our model is doubly intractable and poses significant computational challenges that we overcome using recent advances in Markov chain Monte Carlo (MCMC) methods. We demonstrate our methodology with case studies on the London retail system and airports in England. 
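As a cartoon of the MCMC ingredient, here is a Metropolis sampler targeting a one-dimensional Boltzmann distribution proportional to exp(-V(x)), with an invented double-well potential V. The actual model in the talk is far richer (and doubly intractable, requiring specialised MCMC); this only shows the basic accept/reject mechanism:

```python
import math
import random

def metropolis(V, steps=20000, step=1.0, seed=7):
    """Random-walk Metropolis: propose x + N(0, step), accept with
    probability min(1, exp(V(x) - V(y))), so exp(-V) is invariant."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(steps):
        y = x + rng.gauss(0.0, step)
        log_accept = V(x) - V(y)
        if log_accept >= 0.0 or rng.random() < math.exp(log_accept):
            x = y
        samples.append(x)
    return samples

V = lambda x: (x * x - 1.0) ** 2      # invented double-well: modes near -1 and +1
xs = metropolis(V)
```

The potential function plays the same role as the talk's supply/demand potential: low-potential (stable) configurations receive high probability.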

A multiscale approximation of a Cahn-Larché system with phase separation on the microscale 15:10 Thu 22 Feb, 2018 :: Ingkarni Wardli 5.57 :: Ms Lisa Reischmann :: University of Augsburg
We consider the process of phase separation of a binary system under the influence of mechanical deformation, and we derive a mathematical multiscale model which describes the evolving microstructure, taking into account the elastic properties of the involved materials.
Motivated by phase-separation processes observed in lipid monolayers in film-balance experiments, the starting point of the model is the Cahn-Hilliard equation coupled with the equations of linear elasticity, the so-called Cahn-Larché system.
Owing to the fact that the mechanical deformation takes place on a macroscopic scale whereas the phase separation happens on a microscopic level, a multiscale approach is imperative.
We assume the pattern of the evolving microstructure to have an intrinsic length scale associated with it, which, after non-dimensionalisation, leads to a scaled model involving a small parameter epsilon > 0, which is suitable for periodic homogenisation techniques.
For the full nonlinear problem the so-called homogenised problem is then obtained by letting epsilon tend to zero using the method of asymptotic expansion.
Furthermore, we present a linearised Cahn-Larché system and use the method of two-scale convergence to obtain, in a mathematically rigorous way, the associated limit problem, which turns out to have the same structure as in the nonlinear case. Properties of the limit model will be discussed. 

Models, machine learning, and robotics: understanding biological networks 15:10 Fri 16 Mar, 2018 :: Horace Lamb 1022 :: Prof Steve Oliver :: University of Cambridge
The availability of complete genome sequences has enabled the construction of computer models of metabolic networks that may be used to predict the impact of genetic mutations on growth and survival. Both logical and constraint-based models of the metabolic network of the model eukaryote, the ale yeast Saccharomyces cerevisiae, have been available for some time and are continually being improved by the research community. While such models are very successful at predicting the impact of deleting single genes, the prediction of the impact of higher-order genetic interactions is a greater challenge. Initial studies of limited gene sets provided encouraging results. However, the availability of comprehensive experimental data for the interactions between genes involved in metabolism demonstrated that, while the models were able to predict the general properties of the genetic interaction network, their ability to predict interactions between specific pairs of metabolic genes was poor. I will examine the reasons for this poor performance and demonstrate ways of improving the accuracy of the models by exploiting the techniques of machine learning and robotics.
The utility of these metabolic models rests on the firm foundations of genome-sequencing data. However, there are two major problems with these kinds of network models: they have no dynamics, and they do not deal with the uncertain and incomplete nature of much biological data. To deal with these problems, we have developed the Flexible Nets (FNs) modelling formalism. FNs were inspired by Petri Nets and can deal with missing or uncertain data, incorporate both dynamics and regulation, and also have the potential for model-predictive control of biotechnological processes.


Topological Data Analysis 15:10 Fri 31 Aug, 2018 :: Napier 208 :: Dr Vanessa Robins :: Australian National University
Topological Data Analysis has grown out of work focussed on deriving qualitative, and yet quantifiable, information about the shape of data. The underlying assumption is that knowledge of shape (the way the data are distributed) permits high-level reasoning and modelling of the processes that created this data. The zeroth-order aspect of shape is the number of pieces: "connected components" to a topologist; "clustering" to a statistician. Higher-order topological aspects of shape are holes, quantified as "non-bounding cycles" in homology theory. These signal the existence of some type of constraint on the data-generating process.
Homology lends itself naturally to computer implementation, but its naive application is not robust to noise. This inspired the development of persistent homology: an algebraic topological tool that measures changes in the topology of a growing sequence of spaces (a filtration). Persistent homology provides invariants, called barcodes or persistence diagrams, that are sets of intervals recording the birth and death parameter values of each homology class in the filtration. It captures information about the shape of data over a range of length scales, and enables the identification of "noisy" topological structure.
Statistical analysis of persistent homology has been challenging because the raw information (the persistence diagrams) is provided as sets of intervals rather than functions. Various approaches to converting persistence diagrams to functional forms have been developed recently, and have found application to data ranging from the distribution of galaxies, to porous materials, and cancer detection. 
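For the zeroth-order case, persistence can be computed with nothing more than a union-find structure over edges sorted by length. The sketch below runs a distance filtration on a toy point set (chosen here purely for illustration): every component is born at threshold 0 and dies at the edge length that first merges it into another component:

```python
import itertools
import math

def zero_dim_persistence(points):
    """Return the sorted death values of merged components in the
    distance filtration of a finite point set (all births are at 0)."""
    parent = list(range(len(points)))

    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted((math.dist(p, q), i, j)
                   for (i, p), (j, q) in itertools.combinations(enumerate(points), 2))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                   # threshold d merges two components
            parent[ri] = rj
            deaths.append(d)
    return deaths

# Two well-separated pairs of points: two short intervals (within-cluster
# merges) and one long interval (the merge of the two clusters).
pts = [(0, 0), (0, 1), (10, 0), (10, 1)]
deaths = zero_dim_persistence(pts)
```

The long interval (death 10) is the "signal" (two genuine clusters); the short ones (death 1) are the fine-scale structure a barcode lets you distinguish from it.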

Random walks 15:10 Fri 12 Oct, 2018 :: Napier 208 :: A/Prof Kais Hamza :: Monash University
A random walk is arguably the most basic stochastic process one can define. It is also among the most intuitive objects in the theory of probability and stochastic processes. For these and other reasons, it is one of the most studied processes, or rather families of processes, finding applications in all areas of science, technology and engineering.
In this talk, I will start by recalling some of the classical results for random walks and then discuss some of my own recent explorations in this area of research that has maintained relevance for decades. 
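For concreteness, two of the classical facts about the simple symmetric walk (mean position zero, variance growing linearly with the number of steps) can be checked empirically in a few lines; the sample sizes below are arbitrary:

```python
import random

# Simple symmetric random walk on the integers: steps of +1 or -1 with
# equal probability. After n steps, E[position] = 0 and Var[position] = n.
def walk(n, rng):
    pos = 0
    for _ in range(n):
        pos += rng.choice((-1, 1))
    return pos

rng = random.Random(42)
finals = [walk(100, rng) for _ in range(2000)]       # 2000 walks of 100 steps
mean = sum(finals) / len(finals)                     # should be near 0
var = sum(x * x for x in finals) / len(finals)       # should be near 100
```

Note the parity constraint: after an even number of steps the walker is always at an even position, a small example of the combinatorial structure classical random-walk results exploit.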

Bayesian Synthetic Likelihood 15:10 Fri 26 Oct, 2018 :: Napier 208 :: A/Prof Chris Drovandi :: Queensland University of Technology
Complex stochastic processes are of interest in many applied disciplines. However, the likelihood function associated with such models is often computationally intractable, prohibiting standard statistical inference frameworks for estimating model parameters based on data. Currently, the most popular simulation-based parameter estimation method is approximate Bayesian computation (ABC). Despite the widespread applicability and success of ABC, it has some limitations. This talk will describe an alternative approach, called Bayesian synthetic likelihood (BSL), which overcomes some limitations of ABC and can be much more effective in certain classes of applications. The talk will also describe various extensions to the standard BSL approach. This project has been a joint effort with several academic collaborators, postdocs and PhD students. 
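The core of the synthetic-likelihood idea fits in a few lines: simulate the model many times at a candidate parameter, summarise each simulated dataset, fit a Gaussian to the summaries, and score the observed summary against that Gaussian. The one-dimensional toy below (Normal data with a sample-mean summary) is my own illustrative stand-in for an intractable simulator, not an example from the talk:

```python
import math
import random

def synthetic_loglik(theta, s_obs, n_sims=200, n_data=50, seed=11):
    """Gaussian synthetic log-likelihood of the observed summary s_obs,
    using n_sims simulated datasets of size n_data at parameter theta."""
    rng = random.Random(seed)
    # Summary statistic of each simulated dataset: its sample mean.
    sims = [sum(rng.gauss(theta, 1.0) for _ in range(n_data)) / n_data
            for _ in range(n_sims)]
    mu = sum(sims) / n_sims
    var = sum((s - mu) ** 2 for s in sims) / (n_sims - 1)
    return -0.5 * math.log(2 * math.pi * var) - (s_obs - mu) ** 2 / (2 * var)

s_obs = 0.1                                # stand-in "observed" summary
ll_good = synthetic_loglik(0.0, s_obs)     # parameter close to the data
ll_bad = synthetic_loglik(3.0, s_obs)      # parameter far from the data
```

In BSL proper, this synthetic likelihood replaces the intractable one inside an MCMC sampler over the parameter; here it simply scores two candidate values.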
News matching "Hidden Markov processes" 
Sam Cohen wins prize for best student talk at Aust MS 2009 Congratulations to Mr Sam Cohen, a PhD student within the School, who was awarded the B. H. Neumann Prize for the best student paper at the 2009 meeting of the Australian Mathematical Society for his talk on
Dynamic Risk Measures and Nonlinear Expectations with Markov Chain noise. Posted Tue 6 Oct 09. 
Publications matching "Hidden Markov processes" 

On Markov-modulated exponential-affine bond price formulae Elliott, Robert; Siu, T, Applied Mathematical Finance 16 (1–15) 2009
Discrete-time expectation maximization algorithms for Markov-modulated Poisson processes Elliott, Robert; Malcolm, William, IEEE Transactions on Automatic Control 53 (247–256) 2008
Pricing Options and Variance Swaps in Markov-Modulated Brownian Markets Elliott, Robert; Swishchuk, A, chapter in Hidden Markov Models in Finance (Vieweg, Springer Science+Business Media) 45–68, 2007
Smoothed Parameter Estimation for a Hidden Markov Model of Credit Quality Korolkiewicz, M; Elliott, Robert, chapter in Hidden Markov Models in Finance (Vieweg, Springer Science+Business Media) 69–90, 2007
The Term Structure of Interest Rates in a Hidden Markov Setting Elliott, Robert; Wilson, C, chapter in Hidden Markov Models in Finance (Vieweg, Springer Science+Business Media) 15–30, 2007
A Markov analysis of social learning and adaptation Wheeler, Scott; Bean, Nigel; Gaffney, Janice; Taylor, Peter, Journal of Evolutionary Economics 16 (299–319) 2006
A hidden Markov approach to the forward premium puzzle Elliott, Robert; Han, B, International Journal of Theoretical and Applied Finance 9 (1009–1020) 2006
Data-recursive smoother formulae for partially observed discrete-time Markov chains Elliott, Robert; Malcolm, William, Stochastic Analysis and Applications 24 (579–597) 2006
Option pricing for GARCH models with Markov switching Elliott, Robert; Siu, T; Chan, L, International Journal of Theoretical and Applied Finance 9 (825–841) 2006
Option Pricing for Pure Jump Processes with Markov Switching Compensators Elliott, Robert, Finance and Stochastics 10 (250–275) 2006
Impulsive control of a sequence of rumour processes Pearce, Charles; Kaya, C; Belen, Selma, chapter in Continuous optimization: Current trends and modern applications (Springer) 387–407, 2005
New Gaussian mixture state estimation schemes for discrete time hybrid Gauss-Markov systems Elliott, Robert; Dufour, F; Malcolm, William, The 2005 American Control Conference, Portland, OR, USA 08/06/05
Simulating catchment-scale monthly rainfall with classes of hidden Markov models Whiting, Julian; Thyer, M; Lambert, Martin; Metcalfe, Andrew, The 29th Hydrology and Water Resources Symposium, Rydges Lakeside, Canberra, Australia 20/02/05
General smoothing formulas for Markov-modulated Poisson observations Elliott, Robert; Malcolm, William, IEEE Transactions on Automatic Control 50 (1123–1134) 2005
Hidden Markov chain filtering for a jump diffusion model Wu, P; Elliott, Robert, Stochastic Analysis and Applications 23 (153–163) 2005
Hidden Markov filter estimation of the occurrence time of an event in a financial market Elliott, Robert; Tsoi, A, Stochastic Analysis and Applications 23 (1165–1177) 2005
Hitting probabilities and hitting times for stochastic fluid flows Bean, Nigel; O'Reilly, Malgorzata; Taylor, Peter, Stochastic Processes and their Applications 115 (1530–1556) 2005
Ramaswami's duality and probabilistic algorithms for determining the rate matrix for a structured GI/M/1 Markov chain Hunt, Emma, The ANZIAM Journal 46 (485–493) 2005
Risk-sensitive filtering and smoothing for continuous-time Markov processes Malcolm, William; Elliott, Robert; James, M, IEEE Transactions on Information Theory 51 (1731–1738) 2005
State and mode estimation for discrete-time jump Markov systems Elliott, Robert; Dufour, F; Malcolm, William, SIAM Journal on Control and Optimization 44 (1081–1104) 2005
A probabilistic algorithm for finding the rate matrix of a block-GI/M/1 Markov chain Hunt, Emma, The ANZIAM Journal 45 (457–475) 2004
Development of Non-Homogeneous and Hierarchical Hidden Markov Models for Modelling Monthly Rainfall and Streamflow Time Series Whiting, Julian; Lambert, Martin; Metcalfe, Andrew; Kuczera, George, World Water and Environmental Resources Congress (2004), Salt Lake City, Utah, USA 27/06/04
Contribution of active membrane processes to conducted hyperpolarization in arterioles of hamster cheek pouch Crane, Glenis Jayne; Neild, T; Segal, S, Microcirculation 11 (425–433) 2004
Robust M-ary detection filters and smoothers for continuous-time jump Markov systems Elliott, Robert; Malcolm, William, IEEE Transactions on Automatic Control 49 (1046–1055) 2004
Arborescences, matrix-trees and the accumulated sojourn time in a Markov process Pearce, Charles; Falzon, L, chapter in Stochastic analysis and applications Volume 3 (Nova Science Publishers) 147–168, 2003
Identification of probability distributions within hidden state models of rainfall Whiting, Julian; Lambert, Martin; Metcalfe, Andrew, 28th International Hydrology and Water Resources Symposium, Wollongong, NSW, Australia 10/11/03
A probabilistic algorithm for determining the fundamental matrix of a block M/G/1 Markov chain Hunt, Emma, Mathematical and Computer Modelling 38 (1203–1209) 2003
A complete yield curve description of a Markov interest rate model Elliott, Robert; Mamon, R, International Journal of Theoretical and Applied Finance 6 (317–326) 2003
A nonparametric hidden Markov model for climate state identification Lambert, Martin; Whiting, Julian; Metcalfe, Andrew, Hydrology and Earth System Sciences 7 (652–667) 2003
Robust parameter estimation for asset price models with Markov modulated volatilities Elliott, Robert; Malcolm, William; Tsoi, A, Journal of Economic Dynamics & Control 27 (1391–1409) 2003
Rumours, epidemics, and processes of mass action: Synthesis and analysis Dickinson, Rowland; Pearce, Charles, Mathematical and Computer Modelling 38 (1157–1167) 2003
MAP/PH/1 queues with level-dependent feedback and their departure processes Green, David, Matrix-Analytic Methods: Theory and Applications, Adelaide, Australia 14/07/02
Bivariate stochastic modelling of ephemeral streamflow Cigizoglu, H; Adamson, Peter; Metcalfe, Andrew, Hydrological Processes 16 (1451–1465) 2002
Portfolio optimization, hidden Markov models, and technical analysis of P&F charts Elliott, Robert; Hinz, J, International Journal of Theoretical and Applied Finance 5 (385–399) 2002
Supporting maintenance strategies using Markov models Al-Hassan, K; Swailes, D; Chan, J; Metcalfe, Andrew, IMA Journal of Management Mathematics 13 (17–27) 2002
Truncation and augmentation of level-independent QBD processes Latouche, Guy; Taylor, Peter, Stochastic Processes and their Applications 99 (53–80) 2002
Hidden Markov chain filtering for generalised Bessel processes Elliott, Robert; Platen, E, chapter in Stochastics in Finite and Infinite Dimensions, in honor of Gopinath Kallianpur (Birkhauser) 123–143, 2001
Robust M-ary detection filters for continuous-time jump Markov systems Elliott, Robert; Malcolm, William, The 40th IEEE Conference on Decision and Control (CDC), Orlando, Florida 04/12/01
Robust smoother dynamics for Poisson processes driven by an Itô diffusion Elliott, Robert; Malcolm, William, The 40th IEEE Conference on Decision and Control (CDC), Orlando, Florida 04/12/01
On the existence of a quasi-stationary measure for a Markov chain Lasserre, J; Pearce, Charles, Annals of Probability 29 (437–446) 2001
Hidden state Markov chain time series models for arid zone hydrology Cigizoglu, K; Adamson, Peter; Lambert, Martin; Metcalfe, Andrew, International Symposium on Water Resources and Environmental Impact Assessment (2001), Istanbul, Turkey 11/07/01
Entropy, Markov information sources and Parrondo games Pearce, Charles, UPoN'99: Second International Conference, Adelaide, Australia 12/07/99
Lag correlations of approximating departure processes for MAP/PH/1 queues Green, David, 3rd International Conference on Matrix Analytic Methods, Leuven, Belgium 01/07/00
Level-phase independence for GI/M/1-type Markov chains Latouche, Guy; Taylor, Peter, Journal of Applied Probability 37 (984–998) 2000
Quasi-stationary distributions for level-dependent quasi-birth-and-death processes Bean, Nigel; Pollett, P; Taylor, Peter, Stochastic Models 16 (511–541) 2000
Advanced search options
You may be able to improve your search results by using the following syntax:
Query                       Matches the following
Asymptotic Equation         Anything with "Asymptotic" or "Equation".
+Asymptotic +Equation       Anything with "Asymptotic" and "Equation".
+Stokes -"Navier-Stokes"    Anything containing "Stokes" but not "Navier-Stokes".
Dynam*                      Anything containing "Dynamic", "Dynamical", "Dynamicist", etc.
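The matching behaviour described in the table can be sketched in a few lines of Python. The `matches` function below is purely illustrative and is not the site's actual search implementation (which presumably relies on a database full-text index); it simply applies the rules above: bare terms are optional ("or"), `+` marks a required term, `-` an excluded term, quotes group a phrase, and a trailing `*` is a prefix wildcard.

```python
import re

def matches(query: str, text: str) -> bool:
    """Illustrative matcher for the boolean search syntax in the table above.

    Rules: bare term = optional (OR); +term = required; -term = excluded;
    "a phrase" = single term; term* = prefix wildcard.
    """
    # Tokenise: optional +/- prefix, then a quoted phrase or a bare word.
    tokens = re.findall(r'([+-]?)(?:"([^"]+)"|(\S+))', query)
    text_lower = text.lower()

    def term_in_text(term: str) -> bool:
        term = term.lower()
        if term.endswith('*'):  # prefix wildcard: match the stem at a word start
            return re.search(r'\b' + re.escape(term[:-1]), text_lower) is not None
        return re.search(r'\b' + re.escape(term) + r'\b', text_lower) is not None

    has_optional = False
    optional_hit = False
    for prefix, phrase, word in tokens:
        present = term_in_text(phrase or word)
        if prefix == '+':
            if not present:
                return False        # a required term is missing
        elif prefix == '-':
            if present:
                return False        # an excluded term is present
        else:
            has_optional = True
            optional_hit = optional_hit or present
    # If there were no optional terms, the +/- constraints alone decide it.
    return optional_hit if has_optional else True
```

For example, `matches('+Stokes -"Navier-Stokes"', 'Stokes flow')` succeeds, while the same query fails against a text mentioning "Navier-Stokes".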
