The University of Adelaide

Search the School of Mathematical Sciences


People matching "Influences on lobster (Jasus edwardsii) catch rate"

Professor Nigel Bean
Chair of Applied Mathematics


More about Nigel Bean...
Professor Robert Elliott
Adjunct Professor


More about Robert Elliott...
Dr David Green
Lecturer in Applied Mathematics


More about David Green...
Professor Tony Roberts
Professor of Applied Mathematics


More about Tony Roberts...
Associate Professor Joshua Ross
Senior Lecturer in Applied Mathematics


More about Joshua Ross...
Professor Matthew Roughan
Professor of Applied Mathematics


More about Matthew Roughan...

Courses matching "Influences on lobster (Jasus edwardsii) catch rate"

Financial Modelling: Tools and Techniques

The growth of the range of financial products that are traded on financial markets or are available at other financial institutions is a notable feature of the finance industry. A major factor contributing to this growth has been the development of sophisticated methods to price these products. The significance to the finance industry of developing a method for pricing options (financial derivatives) was recognized by the awarding of the Nobel Prize in Economics to Myron Scholes and Robert Merton in 1997. The mathematics upon which their method is built is stochastic calculus in continuous time. Binomial lattice type models provide another approach for pricing options. These models are formulated in discrete time, and the examination of their structure and application in various financial settings takes place in a mathematical context that is less technically demanding than when time is continuous. This course discusses the binomial framework, shows how discrete-time models currently used in the financial industry are formulated within this framework, and uses the models to compute prices and construct hedges to manage financial risk. Spreadsheets are used to facilitate computations where appropriate. Topics covered are: the no-arbitrage assumption for financial markets; no-arbitrage inequalities; formulation of the one-step binomial model; basic pricing formula; the Cox-Ross-Rubinstein (CRR) model; application to European style options, exchange rates and interest rates; formulation of the n-step binomial model; backward induction formula; forward induction formula; n-step CRR model; relationship to Black-Scholes; forward and futures contracts; exotic options; path dependent options; implied volatility trees; implied binomial trees; interest rate models; hedging; real options; implementing the models using EXCEL spreadsheets.
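The backward-induction pricing machinery described above can be sketched in a few lines. The following Python function is illustrative only (the course itself works in EXCEL spreadsheets, and all parameter values below are invented): it prices a European call on an n-step CRR lattice.

```python
# A minimal sketch of the n-step Cox-Ross-Rubinstein (CRR) binomial model;
# parameter values are illustrative, not from the course.
import math

def crr_european_call(S0, K, r, sigma, T, n):
    """Price a European call by backward induction on an n-step CRR lattice."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1 / u                             # down factor (CRR choice d = 1/u)
    q = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs at the n+1 leaves of the lattice
    values = [max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    # backward induction: discounted risk-neutral expectation at each node
    for step in range(n, 0, -1):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(step)]
    return values[0]

price = crr_european_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=200)
```

As n grows the lattice price approaches the Black-Scholes value (about 10.45 for these inputs), which is the "relationship to Black-Scholes" named in the topic list.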

More about this course...

Modelling and Simulation of Stochastic Systems

The course provides students with the skills to analyse and design systems using modelling and simulation techniques. Case studies will be undertaken involving hands-on use of simulation packages. The application of simulation in areas such as manufacturing, telecommunications and transport will be investigated. At the end of this course, students will be capable of identifying practical situations where simulation modelling can be helpful, reporting to management on how they would undertake such a project, collecting relevant data, building and validating a model, analysing the output and reporting their findings to management. Students complete a project in groups of two or three, write a concise summary of what they have done and report their findings to the class. The project report at the end of this course should be a substantial document that is a record of a student's practical ability in simulation modelling, which can also become part of a portfolio or CV. Topics covered are: introduction to simulation, hand simulation, introduction to a simulation package, review of basic probability theory, introduction to random number generation, generation of random variates, analysis of simulation output, variance reduction techniques and basic analytic queueing models.
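Two of the listed topics, generation of random variates and analysis of simulation output, can be illustrated with a minimal hand simulation. The sketch below is plain Python rather than a simulation package, and all rates are invented: it generates exponential variates by the inverse-transform method and pushes customers through a single-server queue.

```python
# Inverse-transform variate generation feeding a hand simulation of an
# M/M/1 queue; parameter values are illustrative only.
import math
import random

def exp_variate(rate, rng):
    """Inverse transform: if U ~ Uniform(0,1), then -ln(U)/rate ~ Exp(rate)."""
    return -math.log(rng.random()) / rate

def mm1_mean_wait(arrival_rate, service_rate, n_customers, seed=1):
    """Simulate n_customers through an M/M/1 queue; return mean wait in queue."""
    rng = random.Random(seed)
    t_arrival = 0.0
    server_free_at = 0.0
    total_wait = 0.0
    for _ in range(n_customers):
        t_arrival += exp_variate(arrival_rate, rng)   # next arrival time
        start = max(t_arrival, server_free_at)        # service start time
        total_wait += start - t_arrival               # time spent queueing
        server_free_at = start + exp_variate(service_rate, rng)
    return total_wait / n_customers

w = mm1_mean_wait(arrival_rate=0.8, service_rate=1.0, n_customers=200_000)
```

For an M/M/1 queue the mean wait in queue has the known analytic value λ/(μ(μ−λ)), here 4.0, so comparing the estimate against it is exactly the kind of validation against a "basic analytic queueing model" the course description mentions.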

More about this course...

Statistical Analysis and Modelling 1

This is a first course in Statistics for mathematically inclined students. It will address the key principles underlying commonly used statistical methods such as confidence intervals, hypothesis tests, inference for means and proportions, and linear regression. It will develop a deeper mathematical understanding of these ideas, many of which will be familiar from studies in secondary school. The application of basic and more advanced statistical methods will be illustrated on a range of problems from areas such as medicine, science, technology, government, commerce and manufacturing. The use of the statistical package SPSS will be developed through a sequence of computer practicals. Topics covered will include: basic probability and random variables, fundamental distributions, inference for means and proportions, comparison of independent and paired samples, simple linear regression, diagnostics and model checking, multiple linear regression, simple factorial models, models with factors and continuous predictors.
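As a small taste of the confidence-interval material, here is a hand computation of a 95% t-interval for a mean. The course uses SPSS; this Python sketch, its data, and the quoted critical value are purely illustrative.

```python
# 95% confidence interval for a population mean: x̄ ± t* · s/√n.
# Data and the t critical value (df = 9) are illustrative only.
import math
import statistics

def mean_confidence_interval(data, t_crit):
    """Return (lower, upper) limits of the t-interval for the mean."""
    n = len(data)
    xbar = statistics.mean(data)
    se = statistics.stdev(data) / math.sqrt(n)  # estimated standard error
    return xbar - t_crit * se, xbar + t_crit * se

sample = [4.9, 5.1, 5.0, 4.8, 5.2, 5.3, 4.7, 5.0, 5.1, 4.9]
lo, hi = mean_confidence_interval(sample, t_crit=2.262)  # t_{0.975, df=9}
```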

More about this course...

Statistical Modelling and Inference

Statistical methods are important to all areas that rely on data, including science, technology, government and commerce. To deal with the complex problems that arise in practice requires a sound understanding of fundamental statistical principles together with a range of suitable modelling techniques. Computing using a high-level statistical package is also an essential element of modern statistical practice. This course provides an introduction to the principles of statistical inference and the development of linear statistical models with the statistical package R. Topics covered are: point estimates, unbiasedness, mean-squared error, confidence intervals, tests of hypotheses, power calculations, derivation of one and two-sample procedures; simple linear regression, regression diagnostics, prediction; linear models, analysis of variance (ANOVA), multiple regression, factorial experiments, analysis of covariance models, model building; likelihood-based methods for estimation and testing, goodness-of-fit tests; sample surveys, population means, totals and proportions, simple random samples, stratified random samples.
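The simple linear regression topic can be made concrete with a least-squares fit computed from first principles. The course itself uses R; this Python sketch with invented data is only for illustration.

```python
# Least-squares fit of y = a + b·x by hand; data are illustrative only.
def simple_linear_regression(x, y):
    """Return (intercept, slope) minimising the residual sum of squares."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sxy / sxx                 # b = Sxy / Sxx
    intercept = ybar - slope * xbar   # a = ȳ - b·x̄
    return intercept, slope

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = simple_linear_regression(x, y)  # slope close to 2
```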

More about this course...

Statistical Modelling III

One of the key requirements of an applied statistician is the ability to formulate appropriate statistical models and then apply them to data in order to answer the questions of interest. Most often, such models can be seen as relating a response variable to one or more explanatory variables. For example, in a medical experiment we may seek to evaluate a new treatment by relating patient outcome to treatment received while allowing for background variables such as age, sex and disease severity. In this course, a rigorous discussion of the linear model is given and various extensions are developed. There is a strong practical emphasis and the statistical package R is used extensively. Topics covered are: the linear model, least squares estimation, generalised least squares estimation, properties of estimators, the Gauss-Markov theorem; geometry of least squares, subspace formulation of linear models, orthogonal projections; regression models, factorial experiments, analysis of covariance and model formulae; regression diagnostics, residuals, influence diagnostics, transformations, Box-Cox models, model selection and model building strategies; models with complex error structure, split-plot experiments; logistic regression models.
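One of the diagnostics listed above, influence, can be sketched through leverage values, which for simple linear regression reduce to h_i = 1/n + (x_i − x̄)²/Sxx. The course uses R extensively; the Python below and its data are invented purely for illustration.

```python
# Leverage (hat) values for simple linear regression, computed by hand;
# data are illustrative only.
def leverages(x):
    """Leverage of each observation: h_i = 1/n + (x_i - x̄)² / Sxx."""
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return [1 / n + (xi - xbar) ** 2 / sxx for xi in x]

h = leverages([1, 2, 3, 4, 10])  # the outlying x = 10 has the largest leverage
```

The leverages always sum to the number of fitted parameters (here two), and the outlying design point dominates, which is the usual starting point for influence diagnostics.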

More about this course...

Events matching "Influences on lobster (Jasus edwardsii) catch rate"

Maths and Movie Making
15:10 Fri 13 Oct, 2006 :: G08 Mathematics Building University of Adelaide :: Dr Michael Anderson

Mathematics underlies many of the techniques used in modern movie making. This talk will sketch out the movie visual effects pipeline, discussing how mathematics is used in the various stages and detailing some of the mathematical areas that are still being actively researched.
The talk will finish with an overview of the type of work the speaker is involved in, the steps that led him there and the opportunities for mathematicians in this new and exciting area.
Mathematical modelling of multidimensional tissue growth
16:10 Tue 24 Oct, 2006 :: Benham Lecture Theatre :: Prof John King

Some simple continuum-mechanics-based models for the growth of biological tissue will be formulated and their properties (particularly with regard to stability) described.
A Bivariate Zero-inflated Poisson Regression Model and application to some Dental Epidemiological data
14:10 Fri 27 Oct, 2006 :: G08 Mathematics Building University of Adelaide :: University Prof Sudhir Paul

Data in the form of paired (pre-treatment, post-treatment) counts arise in the study of the effects of several treatments after accounting for possible covariate effects. An example of such a data set comes from a dental epidemiological study in Belo Horizonte (the Belo Horizonte caries prevention study) which evaluated various programmes for reducing caries. Also, these data may show more pairs of zeros than can be accounted for by a simpler model, such as a bivariate Poisson regression model. In such situations we propose to use a zero-inflated bivariate Poisson regression (ZIBPR) model for the paired (pre-treatment, post-treatment) count data. We develop an EM algorithm to obtain maximum likelihood estimates of the parameters of the ZIBPR model. Further, we obtain the exact Fisher information matrix of the maximum likelihood estimates of the parameters of the ZIBPR model and develop a procedure for testing treatment effects. The procedure to detect treatment effects based on the ZIBPR model is compared, in terms of size, by simulations, with an earlier procedure using a zero-inflated Poisson regression (ZIPR) model of the post-treatment count with the pre-treatment count treated as a covariate. The procedure based on the ZIBPR model holds level most effectively. A further simulation study indicates good power property of the procedure based on the ZIBPR model. We then compare our analysis, of the decayed, missing and filled teeth (DMFT) index data from the caries prevention study, based on the ZIBPR model with the analysis using a zero-inflated Poisson regression model in which the pre-treatment DMFT index is taken to be a covariate.
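The talk's model is bivariate, but the basic zero-inflation idea is easy to state in the univariate case: with probability p an observation is a structural zero, otherwise it is Poisson. The sketch below is background only, not the speaker's ZIBPR model, and its parameter values are invented.

```python
# Univariate zero-inflated Poisson (ZIP) probability mass function:
# extra zeros with mixing weight p; parameters are illustrative only.
import math

def zip_pmf(k, lam, p):
    """P(Y = k) when Y ~ ZIP(lam, p)."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    if k == 0:
        return p + (1 - p) * poisson  # structural zeros inflate P(Y = 0)
    return (1 - p) * poisson

total = sum(zip_pmf(k, lam=2.0, p=0.3) for k in range(50))  # sums to 1
```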
Modelling gene networks: the case of the quorum sensing network in bacteria.
15:10 Fri 1 Jun, 2007 :: G08 Mathematics Building University of Adelaide :: Dr Adrian Koerber

The quorum sensing regulatory gene-network is employed by bacteria to provide a measure of their population-density and switch their behaviour accordingly. I will present an overview of quorum sensing in bacteria together with some of the modelling approaches I've taken to describe this system. I will also discuss how this system relates to virulence and medical treatment, and the insights gained from the mathematics.
Global and Local stationary modelling in finance: Theory and empirical evidence
14:10 Thu 10 Apr, 2008 :: G04 Napier Building University of Adelaide :: Prof. Dominique Guégan :: Universite Paris 1 Pantheon-Sorbonne

To model real data sets using second order stochastic processes requires that the data sets satisfy the second order stationarity condition. This stationarity condition concerns the unconditional moments of the process. It is in that context that most of the models developed since the sixties have been studied; we refer to the ARMA processes (Brockwell and Davis, 1988), the ARCH, GARCH and EGARCH models (Engle, 1982, Bollerslev, 1986, Nelson, 1990), the SETAR process (Lim and Tong, 1980 and Tong, 1990), the bilinear model (Granger and Andersen, 1978, Guégan, 1994), the EXPAR model (Haggan and Ozaki, 1980), the long memory process (Granger and Joyeux, 1980, Hosking, 1981, Gray, Zang and Woodward, 1989, Beran, 1994, Giraitis and Leipus, 1995, Guégan, 2000), and the switching process (Hamilton, 1988). For all these models, we get an invertible causal solution under specific conditions on the parameters, and then the forecast points and forecast intervals are available.

Thus, the stationarity assumption is the basis for a general asymptotic theory for identification, estimation and forecasting. It guarantees that the increase of the sample size leads to more and more information of the same kind which is basic for an asymptotic theory to make sense.

Non-stationary modelling also has a long tradition in econometrics. It is based on the conditional moments of the data generating process. It appears mainly in the heteroscedastic and volatility models, like the GARCH and related models, and stochastic volatility processes (Ghysels, Harvey and Renault 1997). This non-stationarity also appears in a different way with structural change models like the switching models (Hamilton, 1988), the stopbreak model (Diebold and Inoue, 2001, Breidt and Hsu, 2002, Granger and Hyung, 2004) and the SETAR models, for instance. It can also be observed in linear models with time-varying coefficients (Nicholls and Quinn, 1982, Tsay, 1987).

Thus, using stationary unconditional moments suggests global stationarity for the model, but using non-stationary unconditional moments, non-stationary conditional moments, or assuming the existence of states suggests that this global stationarity fails and that we only observe locally stationary behaviour.

The growing evidence of instability in the stochastic behaviour of stocks, exchange rates, and some economic data sets such as growth rates, characterised by the existence of volatility or of jumps in the variance or in the levels of prices, forces us to question the assumption of global stationarity and its consequences for modelling, particularly for forecasting. We can therefore address several questions with respect to these remarks.

1. What kinds of non-stationarity affect the major financial and economic data sets? How to detect them?

2. Local and global stationarities: How are they defined?

3. What is the impact of evidence of non-stationarity on the statistics computed from globally non-stationary data sets?

4. How can we analyse data sets in the globally non-stationary framework? Does the asymptotic theory work in a non-stationary framework?

5. What kind of models create local stationarity instead of global stationarity? How can we use them to develop a modelling and a forecasting strategy?

These questions have begun to be discussed in some papers in the economic literature. For some of these questions the answers are known; for others, very few works exist. In this talk I will discuss all these problems and will propose two new strategies and models to solve them. Several interesting topics in empirical finance awaiting future research will also be discussed.

Betti's Reciprocal Theorem for Inclusion and Contact Problems
15:10 Fri 1 Aug, 2008 :: G03 Napier Building University of Adelaide :: Prof. Patrick Selvadurai :: Department of Civil Engineering and Applied Mechanics, McGill University

Enrico Betti (1823-1892) is recognized in the mathematics community for his pioneering contributions to topology. An equally important contribution is his formulation of the reciprocity theorem applicable to elastic bodies that satisfy the classical equations of linear elasticity. Although James Clerk Maxwell (1831-1879) proposed a law of reciprocal displacements and rotations in 1864, the contribution of Betti is acknowledged for its underlying formal mathematical basis and generality. The purpose of this lecture is to illustrate how Betti's reciprocal theorem can be used to full advantage to develop compact analytical results for certain contact and inclusion problems in the classical theory of elasticity. Inclusion problems are encountered in a number of areas in applied mechanics, ranging from composite materials to geomechanics. In composite materials, the inclusion represents an inhomogeneity that is introduced to increase either the strength or the deformability characteristics of the resulting material. In geomechanics, the inclusion represents a constructed material region, such as a ground anchor, that is introduced to provide load transfer from structural systems. Similarly, contact problems have applications ranging from the modelling of the behaviour of indentors used in materials testing to the study of foundations used to distribute loads transmitted from structures. In the study of conventional problems the inclusions and the contact regions are directly loaded, and this makes their analysis quite straightforward. When the interaction is induced by loads that are placed exterior to the indentor or inclusion, the direct analysis of the problem becomes inordinately complicated, both in terms of the formulation of the integral equations and their numerical solution.
It is shown by a set of selected examples that the application of Betti's reciprocal theorem leads to the development of exact closed form solutions to what would otherwise be approximate solutions achievable only through the numerical solution of a set of coupled integral equations.
The Role of Walls in Chaotic Mixing
15:10 Fri 22 Aug, 2008 :: G03 Napier Building University of Adelaide :: Dr Jean-Luc Thiffeault :: Department of Mathematics, University of Wisconsin - Madison

I will report on experiments of chaotic mixing in closed and open vessels, in which a highly viscous fluid is stirred by a moving rod. In these experiments we analyze quantitatively how the concentration field of a low-diffusivity dye relaxes towards homogeneity, and observe a slow algebraic decay, at odds with the exponential decay predicted by most previous studies. Visual observations reveal the dominant role of the vessel wall, which strongly influences the concentration field in the entire domain and causes the anomalous scaling. A simplified 1-D model supports our experimental results. Quantitative analysis of the concentration pattern leads to scalings for the distributions and the variance of the concentration field consistent with experimental and numerical results. I also discuss possible ways of avoiding the limiting role of walls.

This is joint work with Emmanuelle Gouillart, Olivier Dauchot, and Stephane Roux.

Probabilistic models of human cognition
15:10 Fri 29 Aug, 2008 :: G03 Napier Building University of Adelaide :: Dr Daniel Navarro :: School of Psychology, University of Adelaide

Over the last 15 years a fairly substantial psychological literature has developed in which human reasoning and decision-making is viewed as the solution to a variety of statistical problems posed by the environments in which we operate. In this talk, I briefly outline the general approach to cognitive modelling that is adopted in this literature, which relies heavily on Bayesian statistics, and introduce a little of the current research in this field. In particular, I will discuss work by myself and others on the statistical basis of how people make simple inductive leaps and generalisations, and the links between these generalisations and how people acquire word meanings and learn new concepts. If time permits, the extensions of the work in which complex concepts may be characterised with the aid of nonparametric Bayesian tools such as Dirichlet processes will be briefly mentioned.
Mathematical modelling of blood flow in curved arteries
15:10 Fri 12 Sep, 2008 :: G03 Napier Building University of Adelaide :: Dr Jennifer Siggers :: Imperial College London

Atherosclerosis, characterised by plaques, is the most common arterial disease. Plaques tend to develop in regions of low mean wall shear stress, and regions where the wall shear stress changes direction during the course of the cardiac cycle. To investigate the effect of the arterial geometry and driving pressure gradient on the wall shear stress distribution we consider an idealised model of a curved artery with uniform curvature. We assume that the flow is fully-developed and seek solutions of the governing equations, finding the effect of the parameters on the flow and wall shear stress distribution. Most previous work assumes the curvature ratio is asymptotically small; however, many arteries have significant curvature (e.g. the aortic arch has curvature ratio approx 0.25), and in this work we consider in particular the effect of finite curvature.

We present an extensive analysis of curved-pipe flow driven by steady and unsteady pressure gradients. Increasing the curvature causes the shear stress on the inside of the bend to rise, indicating that the risk of plaque development would be overestimated by considering only the weak curvature limit.

The Mechanics of Nanoscale Devices
15:10 Fri 10 Oct, 2008 :: G03 Napier Building University of Adelaide :: Associate Prof. John Sader :: Department of Mathematics and Statistics, The University of Melbourne

Nanomechanical sensors are often used to measure environmental changes with extreme sensitivity. Controlling the effects of surfaces and fluid dissipation presents significant challenges to achieving the ultimate sensitivity in these devices. In this talk, I will give an overview of theoretical/experimental work we are undertaking to explore the underlying physical processes in these systems. The talk will be general and aimed at introducing some recent developments in the field of nanomechanical sensors.
Assisted reproduction technology: how maths can contribute
13:10 Wed 22 Oct, 2008 :: Napier 210 :: Dr Yvonne Stokes

Media...
Most people will have heard of IVF (in vitro fertilisation), a technology for helping infertile couples have a baby. Although there are many IVF babies, many will also know that the success rate is still low for the cost and inconvenience involved. The fact that some women cannot make use of IVF because of life-threatening consequences is less well known but motivates research into other technologies, including IVM (in vitro maturation). What has all this to do with maths? Come along and find out how mathematical modelling is contributing to understanding and improvement in this important and interesting field.
Oceanographic Research at the South Australian Research and Development Institute: opportunities for collaborative research
15:10 Fri 21 Nov, 2008 :: Napier G04 :: Associate Prof John Middleton :: South Australian Research and Development Institute

Increasing threats to S.A.'s fisheries and marine environment have underlined the increasing need for soundly based research into the ocean circulation and ecosystems (phyto/zooplankton) of the shelf and gulfs. With the support of Marine Innovation SA, the Oceanography Program has, within 2 years, grown to include 6 FTEs and a budget of over $4.8M. The program currently leads two major research projects, both of which involve numerical and applied mathematical modelling of oceanic flow and ecosystems as well as statistical techniques for the analysis of data. The first is the implementation of the Southern Australian Integrated Marine Observing System (SAIMOS) that is providing data to understand the dynamics of shelf boundary currents, monitor for climate change and understand the phyto/zooplankton ecosystems that underpin SA's wild fisheries and aquaculture. SAIMOS involves the use of ship-based sampling, the deployment of underwater marine moorings, underwater gliders, HF Ocean RADAR, acoustic tracking of tagged fish and Autonomous Underwater Vehicles.

The second major project involves measuring and modelling the ocean circulation and biological systems within Spencer Gulf and the impact on prawn larval dispersal and on the sustainability of existing and proposed aquaculture sites. The discussion will focus on opportunities for collaborative research with both faculty and students in this exciting growth area of S.A. science.

Sloshing in tanks of liquefied natural gas (LNG) vessels
15:10 Wed 22 Apr, 2009 :: Napier LG29 :: Prof. Frederic Dias :: ENS, Cachan

The last scientific conversation I had with Ernie Tuck was on liquid impact. As a matter of fact, we discussed the paper by J.H. Milgram, Journal of Fluid Mechanics 37 (1969), entitled "The motion of a fluid in a cylindrical container with a free surface following vertical impact." Liquid impact is a key issue in sloshing and in particular in sloshing in tanks of LNG vessels. Numerical simulations of sloshing have been performed by various groups, using various types of numerical methods. In terms of the numerical results, the outcome is often impressive, but the question remains of how relevant these results are when it comes to determining impact pressures. The numerical models are too simplified to reproduce the high variability of the measured pressures. In fact, for the time being, it is not possible to simulate accurately both global and local effects. Unfortunately it appears that local effects predominate over global effects when the behaviour of pressures is considered. Having said this, it is important to point out that numerical studies can be quite useful to perform sensitivity analyses in idealized conditions such as a liquid mass falling under gravity on top of a horizontal wall and then spreading along the lateral sides. Simple analytical models inspired by numerical results on idealized problems can also be useful to predict trends. The talk is organized as follows: After a brief introduction on the sloshing problem and on scaling laws, it will be explained to what extent numerical studies can be used to improve our understanding of impact pressures. Results on a liquid mass hitting a wall obtained by a finite-volume code with interface reconstruction as well as results obtained by a simple analytical model will be shown to reproduce the trends of experiments on sloshing. This is joint work with L. Brosset (GazTransport & Technigaz), J.-M. Ghidaglia (ENS Cachan) and J.-P. Braeunig (INRIA).
Nonlinear diffusion-driven flow in a stratified viscous fluid
15:00 Fri 26 Jun, 2009 :: Macbeth Lecture Theatre :: Associate Prof Michael Page :: Monash University

In 1970, two independent studies (by Wunsch and Phillips) of the behaviour of a linear density-stratified viscous fluid in a closed container demonstrated that a slow flow can be generated simply because the container has a sloping boundary surface. This remarkable motion is generated as a result of the curvature of the lines of constant density near any sloping surface, which in turn enables a zero normal-flux condition on the density to be satisfied along that boundary. When the Rayleigh number is large (or equivalently Wunsch's parameter $R$ is small) this motion is concentrated in the near vicinity of the sloping surface, in a thin `buoyancy layer' that has many similarities to an Ekman layer in a rotating fluid.

A number of studies have since considered the consequences of this type of `diffusively-driven' flow in a semi-infinite domain, including in the deep ocean and with turbulent effects included. More recently, Page & Johnson (2008) described a steady linear theory for the broader-scale mass recirculation in a closed container and demonstrated that, unlike in previous studies, it is possible for the buoyancy layer to entrain fluid from that recirculation. That work has since been extended (Page & Johnson, 2009) to the nonlinear regime of the problem and some of the similarities to and differences from the linear case will be described in this talk. Simple and elegant analytical solutions in the limit as $R \to 0$ still exist in some situations, and they will be compared with numerical simulations in a tilted square container at small values of $R$. Further work on both the unsteady flow properties and the flow for other geometrical configurations will also be described.

Modelling fluid-structure interactions in micro-devices
15:00 Thu 3 Sep, 2009 :: School Board Room :: Dr Richard Clarke :: University of Auckland

The flows generated in many modern micro-devices possess very little convective inertia, however, they can be highly unsteady and exert substantial hydrodynamic forces on the device components. Typically these components exhibit some degree of compliance, which traditionally has been treated using simple one-dimensional elastic beam models. However, recent findings have suggested that three-dimensional effects can be important and, accordingly, we consider the elastohydrodynamic response of a rapidly oscillating three-dimensional elastic plate that is immersed in a viscous fluid. In addition, a preliminary model will be presented which incorporates the presence of a nearby elastic wall.
Modelling and pricing for portfolio credit derivatives
15:10 Fri 16 Oct, 2009 :: MacBeth Lecture Theatre :: Dr Ben Hambly :: University of Oxford

The current financial crisis has been in part precipitated by the growth of complex credit derivatives and their mispricing. This talk will discuss some of the background to the `credit crunch', as well as the models and methods used currently. We will then develop an alternative view of large basket credit derivatives, as functions of a stochastic partial differential equation, which addresses some of the shortcomings.
Eigen-analysis of fluid-loaded compliant panels
15:10 Wed 9 Dec, 2009 :: Santos Lecture Theatre :: Prof Tony Lucey :: Curtin University of Technology

This presentation concerns the fluid-structure interaction (FSI) that occurs between a fluid flow and an arbitrarily deforming flexible boundary considered to be a flexible panel or a compliant coating that comprises the wetted surface of a marine vehicle. We develop and deploy an approach that is a hybrid of computational and theoretical techniques. The system studied is two-dimensional and linearised disturbances are assumed. Of particular novelty in the present work is the ability of our methods to extract a full set of fluid-structure eigenmodes for systems that have strong spatial inhomogeneity in the structure of the flexible wall.

We first present the approach and some results of the system in which an ideal, zero-pressure gradient, flow interacts with a flexible plate held at both its ends. We use a combination of boundary-element and finite-difference methods to express the FSI system as a single matrix equation in the interfacial variable. This is then couched in state-space form and standard methods used to extract the system eigenvalues. It is then shown how the incorporation of spatial inhomogeneity in the stiffness of the plate can be either stabilising or destabilising. We also show that adding a further restraint within the streamwise extent of a homogeneous panel can trigger an additional type of hydroelastic instability at low flow speeds. The mechanism for the fluid-to-structure energy transfer that underpins this instability can be explained in terms of the pressure-signal phase relative to that of the wall motion and the effect on this relationship of the added wall restraint.

We then show how the ideal-flow approach can be conceptually extended to include boundary-layer effects. The flow field is now modelled by the continuity equation and the linearised perturbation momentum equation written in velocity-velocity form. The near-wall flow field is spatially discretised into rectangular elements on an Eulerian grid and a variant of the discrete-vortex method is applied. The entire fluid-structure system can again be assembled as a linear system for a single set of unknowns - the flow-field vorticity and the wall displacements - that admits the extraction of eigenvalues. We then show how stability diagrams for the fully-coupled finite flow-structure system can be assembled, in doing so identifying classes of wall-based or fluid-based and spatio-temporal wave behaviour.

Modelling of the Human Skin Equivalent
15:10 Fri 26 Mar, 2010 :: Napier 102 :: Prof Graeme Pettet :: Queensland University of Technology

A brief overview will be given of the development of a so-called Human Skin Equivalent Construct. This laboratory-grown construct can be used for studying the growth, response and repair of human skin subjected to wounding and/or treatment under strictly regulated conditions. Details will also be provided of a series of mathematical models we have developed that describe the dynamics of the Human Skin Equivalent Construct, which can be used to assist in the development of the experimental protocol, and to provide insight into the fundamental processes at play in the growth and development of the epidermis in both healthy and diseased states.
The fluid mechanics of gels used in tissue engineering
15:10 Fri 9 Apr, 2010 :: Santos Lecture Theatre :: Dr Edward Green :: University of Western Australia

Tissue engineering could be called 'the science of spare parts'. Although currently in its infancy, its long-term aim is to grow functional tissues and organs in vitro to replace those which have become defective through age, trauma or disease. Recent experiments have shown that mechanical interactions between cells and the materials in which they are grown have an important influence on tissue architecture, but in order to understand these effects, we first need to understand the mechanics of the gels themselves.

Many biological gels (e.g. collagen) used in tissue engineering have a fibrous microstructure which affects the way forces are transmitted through the material, and which in turn affects cell migration and other behaviours. I will present a simple continuum model of gel mechanics, based on treating the gel as a transversely isotropic viscous material. Two canonical problems are considered involving thin two-dimensional films: extensional flow, and squeezing flow of the fluid between two rigid plates. Neglecting inertia, gravity and surface tension, in each regime we can exploit the thin geometry to obtain a leading-order problem which is sufficiently tractable to allow the use of analytical methods. I discuss how these results could be exploited practically to determine the mechanical properties of real gels. If time permits, I will also talk about work currently in progress which explores the interaction between gel mechanics and cell behaviour.

Meteorological drivers of extreme bushfire events in southern Australia
15:10 Fri 2 Jul, 2010 :: Benham Lecture Theatre :: Prof Graham Mills :: Centre for Australian Weather and Climate Research, Melbourne

Bushfires occur regularly during summer in southern Australia, but only a few of these fires become iconic due to their effects, either in terms of loss of life or economic and social cost. Such events include Black Friday (1939), the Hobart fires (1967), Ash Wednesday (1983), the Canberra bushfires (2003), and most recently Black Saturday in February 2009. In most of these events the weather of the day was statistically extreme in terms of heat, (low) humidity, and wind speed, and in terms of antecedent drought. There are a number of reasons for conducting post-event analyses of the meteorology of these events. One is to identify any meteorological circulation systems or dynamic processes occurring on those days that might not be widely or hitherto recognised, to document these, and to develop new forecast or guidance products. The understanding and prediction of such features can be used in the short term to assist in effective management of fires and the safety of firefighters and in the medium range to assist preparedness for the onset of extreme conditions. The results of such studies can also be applied to simulations of future climates to assess the likely changes in frequency of the most extreme fire weather events, and their documentary records provide a resource that can be used for advanced training purposes. In addition, particularly for events further in the past, revisiting these events using reanalysis data sets and contemporary NWP models can also provide insights unavailable at the time of the events. Over the past few years the Bushfire CRC's Fire Weather and Fire Danger project in CAWCR has studied the mesoscale meteorology of a number of major fire events, including the days of Ash Wednesday 1983, the Dandenong Ranges fire in January 1997, the Canberra fires and the Alpine breakout fires in January 2003, the Lower Eyre Peninsula fires in January 2005 and the Boorabbin fire in December 2007-January 2008. 
Various aspects of these studies are described below, including the structures of dry cold frontal wind changes, the particular character of the cold fronts associated with the most damaging fires in southeastern Australia, and some aspects of how the vertical temperature and humidity structure of the atmosphere may affect the fire weather at the surface. These studies reveal much about these major events, but also suggest future research directions, and some of these will be discussed.
A polyhedral model for boron nitride nanotubes
15:10 Fri 3 Sep, 2010 :: Napier G04 :: Dr Barry Cox :: University of Adelaide

The conventional rolled-up model of nanotubes does not apply to the very small radii tubes, for which curvature effects become significant. In this talk an existing geometric model for carbon nanotubes proposed by the authors, which accommodates this deficiency and which is based on the exact polyhedral cylindrical structure, is extended to a nanotube structure involving two species of atoms in equal proportion, and in particular boron nitride nanotubes. This generalisation allows the principal features to be included as the fundamental assumptions of the model, such as equal bond lengths but distinct bond angles and radii between the two species. The polyhedral model is based on five simple geometric assumptions: (i) all bonds are of equal length, (ii) all bond angles for the boron atoms are equal, (iii) all boron atoms lie at an equal distance from the nanotube axis, (iv) all nitrogen atoms lie at an equal distance from the nanotube axis, and (v) there exists a fixed ratio of pyramidal height H between the boron species compared with the corresponding height in a symmetric single-species nanotube. Working from these postulates, expressions are derived for the various structural parameters such as radii and bond angles for the two species for specific values of the chiral vector numbers (n,m). The new model incorporates an additional constant of proportionality H, which we assume applies to all nanotubes comprising the same elements and is such that H = 1 for a single-species nanotube. Comparison with ab initio studies suggests that this assumption is entirely reasonable, and in particular we determine the value H = 0.56 ± 0.04 for boron nitride, based on computational results in the literature.
This talk relates to work which is a couple of years old and, given time at the end, we will discuss some newer results in geometric models developed with our former student Richard Lee (now also at the University of Adelaide as a postdoc) and some work-in-progress on carbon nanocones. Note: pyramidal height is our own terminology and will be explained in the talk.
Hugs not drugs
15:10 Mon 20 Sep, 2010 :: Ingkarni Wardli B17 :: Dr Scott McCue :: Queensland University of Technology

I will discuss a model for drug diffusion that involves a Stefan problem with a "kinetic undercooling". I like Stefan problems, so I like this model. I like drugs too, but only legal ones of course. Anyway, it turns out that in some parameter regimes, this sophisticated moving boundary problem hardly works better than a simple linear undergraduate model (there's a lesson here for mathematical modelling). On the other hand, for certain polymer capsules, the results are interesting and suggest new means for controlled drug delivery. If time permits, I may discuss certain asymptotic limits that are of interest from a Stefan problem perspective. Finally, I won't bring any drugs with me to the seminar, but I'm willing to provide hugs if necessary.
Statistical physics and behavioral adaptation to Creation's main stimuli: sex and food
15:10 Fri 29 Oct, 2010 :: E10 B17 Suite 1 :: Prof Laurent Seuront :: Flinders University and South Australian Research and Development Institute

Animals typically search for food and mates, while avoiding predators. This is particularly critical for keystone organisms such as intertidal gastropods and copepods (i.e. millimeter-scale crustaceans) as they typically rely on non-visual senses for detecting, identifying and locating mates in their two- and three-dimensional environments. Here, using stochastic methods derived from the field of nonlinear physics, we provide new insights into the nature (i.e. innate vs. acquired) of the motion behavior of gastropods and copepods, and demonstrate how changes in their behavioral properties can be used to identify the trade-offs between foraging for food or sex. The gastropod Littorina littorea hence moves according to fractional Brownian motion while foraging for food (in accordance with the fractal nature of food distributions), and switches to Brownian motion while foraging for sex. In contrast, the swimming behavior of the copepod Temora longicornis belongs to the class of multifractal random walks (MRW; i.e. a form of anomalous diffusion), characterized by a nonlinear moment scaling function for distance versus time. This clearly differs from the traditional Brownian and fractional Brownian walks expected or previously detected in animal behaviors. The divergence between MRW and Levy flights and walks is also discussed, and it is shown how copepod anomalous diffusion is enhanced by the presence and concentration of conspecific water-borne signals, dramatically increasing male-female encounter rates.
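The moment scaling function mentioned here is what separates the walk types: for displacement increments over lag tau, E|dX|^q grows like tau^zeta(q), with zeta(q) = q/2 linear in q for Brownian motion, linear with a different slope for fractional Brownian motion, and nonlinear (concave) for a multifractal random walk. A sketch of estimating zeta(q) for the simple Brownian case (my own illustration, not the talk's data analysis):

```python
import numpy as np

# Simulate a Brownian path and estimate the moment scaling function
# zeta(q) from the slopes of log-moments versus log-lag.
rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(size=200_000))  # Brownian random walk

lags = np.array([1, 2, 4, 8, 16, 32])
qs = [1.0, 2.0, 3.0]
zeta = []
for q in qs:
    moments = [np.mean(np.abs(x[lag:] - x[:-lag])**q) for lag in lags]
    zeta.append(np.polyfit(np.log(lags), np.log(moments), 1)[0])
# For Brownian motion the estimates should sit close to q/2,
# i.e. the straight line 0.5, 1.0, 1.5; a multifractal walk would bend.
```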
Arbitrage bounds for weighted variance swap prices
15:05 Fri 3 Dec, 2010 :: Napier LG28 :: Prof Mark Davis :: Imperial College London

This paper builds on earlier work by Davis and Hobson (Mathematical Finance, 2007) giving model-free (except for a 'frictionless markets' assumption) necessary and sufficient conditions for absence of arbitrage given a set of current-time put and call options on some underlying asset. Here we suppose that the prices of a set of put options, all maturing at the same time, are given and satisfy the conditions for consistency with absence of arbitrage. We now add a path-dependent option, specifically a weighted variance swap, to the set of traded assets and ask what are the conditions on its time-0 price under which consistency with absence of arbitrage is maintained. In the present work we add the modelling assumption that the underlying asset price process has continuous paths. In general, we find that there is always a non-trivial lower bound to the range of arbitrage-free prices, but only in the case of a corridor swap do we obtain a finite upper bound. In the case of, say, the vanilla variance swap, a finite upper bound exists when there are additional traded European options which constrain the left wing of the volatility surface in appropriate ways.
Queues with skill based routing under FCFS–ALIS regime
15:10 Fri 11 Feb, 2011 :: B17 Ingkarni Wardli :: Prof Gideon Weiss :: The University of Haifa, Israel

We consider a system where jobs of several types are served by servers of several types, and a bipartite graph between server types and job types describes feasible assignments. This is a common situation in manufacturing, call centers with skill-based routing, matching of parent-child in adoption or matching in kidney transplants, etc. We consider the case of the first come first served policy: jobs are assigned to the first available feasible server in order of their arrivals. We consider two types of policies for assigning customers to idle servers: random assignment, and assignment to the longest idle server (ALIS). We survey some results for four different situations:

  • For a loss system we find conditions for reversibility and insensitivity.
  • For a manufacturing type system, in which there is enough capacity to serve all jobs, we discuss a product form solution and waiting times.
  • For an infinite matching model, in which an infinite sequence of customers of IID types and an infinite sequence of servers of IID types are matched according to first come first served, we obtain a product-form stationary distribution for this system, which we use to calculate matching rates.
  • For a call center model with overload and abandonments we make some plausible observations.

This talk surveys joint work with Ivo Adan, Rene Caldentey, Cor Hurkens, Ed Kaplan and Damon Wischik, as well as work by Jeremy Visschers, Rishy Talreja and Ward Whitt.
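The infinite matching model in the third bullet can be simulated directly. A minimal sketch follows, in which the bipartite graph and type probabilities are my own toy choices: customer types a and b, server types X and Y, with X able to serve only a and Y able to serve both, and each server in sequence matched to the earliest-arrived feasible customer.

```python
import numpy as np

# FCFS matching of an IID server sequence against an IID customer sequence.
rng = np.random.default_rng(5)
N = 100_000
customers = rng.choice(["a", "b"], size=N, p=[0.4, 0.6])
servers = rng.choice(["X", "Y"], size=N, p=[0.3, 0.7])
can_serve = {"X": {"a"}, "Y": {"a", "b"}}

unmatched = []   # customers passed over so far, in arrival order
counts = {}      # matches for each (server type, customer type) pair
ci = 0
for s in servers:
    # first come first served: take the earliest feasible waiting customer
    idx = next((j for j, c in enumerate(unmatched) if c in can_serve[s]), None)
    while idx is None and ci < N:
        unmatched.append(customers[ci])
        ci += 1
        if unmatched[-1] in can_serve[s]:
            idx = len(unmatched) - 1
    if idx is None:
        break  # finite run: customer stream exhausted
    c = unmatched.pop(idx)
    counts[(s, c)] = counts.get((s, c), 0) + 1

total = sum(counts.values())
rates = {pair: m / total for pair, m in counts.items()}
```

For these rates every X server ends up with an a-customer and every b-customer goes to a Y server, so the empirical matching rates should approach (X,a) = 0.3, (Y,b) = 0.6 and (Y,a) = 0.1.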

Mathematical modelling in nanotechnology
15:10 Fri 4 Mar, 2011 :: 7.15 Ingkarni Wardli :: Prof Jim Hill :: University of Adelaide

Media...
In this talk we present an overview of the mathematical modelling contributions of the Nanomechanics Groups at the Universities of Adelaide and Wollongong. Fullerenes and carbon nanotubes have unique properties, such as low weight, high strength, flexibility, high thermal conductivity and chemical stability, and they have many potential applications in nano-devices. In this talk we first present some new results on the geometric structure of carbon nanotubes and on related nanostructures. One concept that has attracted much attention is the creation of nano-oscillators, to produce frequencies in the gigahertz range, for applications such as ultra-fast optical filters and nano-antennae. The sliding of an inner shell inside an outer shell of a multi-walled carbon nanotube can generate oscillatory frequencies up to several gigahertz, and the shorter the inner tube the higher the frequency. A C60-nanotube oscillator generates high frequencies by oscillating a C60 fullerene inside a single-walled carbon nanotube. Here we discuss the underlying mechanisms of nano-oscillators, using the Lennard-Jones potential together with the continuum approach to mathematically model the C60-nanotube nano-oscillator. Finally, three illustrative examples of recent modelling in hydrogen storage, nanomedicine and nanocomputing are discussed.
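The Lennard-Jones 6-12 potential underlying these oscillator models has a single well whose minimum sets the equilibrium spacing of the interacting surfaces. A small numerical sketch (epsilon and sigma below are arbitrary illustrative values, not graphite constants, and this is pairwise rather than the continuum-integrated form used in the talk):

```python
import numpy as np

# Lennard-Jones 6-12 potential and a numerically located minimum.
eps, sig = 1.0, 1.0  # illustrative well depth and length scale

def V(r):
    return 4 * eps * ((sig / r)**12 - (sig / r)**6)

r = np.linspace(0.9, 3.0, 200_001)
r_min = r[np.argmin(V(r))]
# Analytically the minimum sits at r = 2**(1/6) * sigma, where V = -eps.
```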
Modelling of Hydrological Persistence in the Murray-Darling Basin for the Management of Weirs
12:10 Mon 4 Apr, 2011 :: 5.57 Ingkarni Wardli :: Aiden Fisher :: University of Adelaide

The lakes and weirs along the lower Murray River in Australia are aggregated and considered as a sequence of five reservoirs. A seasonal Markov chain model for the system will be implemented, and a stochastic dynamic program will be used to find optimal release strategies, in terms of expected monetary value (EMV), for the competing demands on the water resource given the stochastic nature of inflows. Matrix analytic methods will be used to analyse the system further, and in particular enable the full distribution of first passage times between any groups of states to be calculated. The full distribution of first passage times can be used to provide a measure of the risk associated with optimum EMV strategies, such as conditional value at risk (CVaR). The sensitivity of the model, and risk, to changing rainfall scenarios will be investigated. The effect of decreasing the level of discretisation of the reservoirs will be explored. Also, the use of matrix analytic methods facilitates the use of hidden states to allow for hydrological persistence in the inflows. Evidence for hydrological persistence of inflows to the lower Murray system, and the effect of making allowance for this, will be discussed.
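The backward-induction step of such a stochastic dynamic program can be sketched on a toy version of the problem. Everything below (state space, inflow distribution, reward numbers) is my own illustrative choice, not the talk's calibrated model: one reservoir with five discretised storage levels, a random inflow of 0 or 1 unit per season, and a release decision that earns a reward but an empty reservoir incurs a penalty.

```python
import numpy as np

levels = 5                       # discretised storage levels 0..4
inflow_probs = {0: 0.4, 1: 0.6}  # hypothetical seasonal inflow distribution
horizon = 12                     # seasons

def reward(level, release):
    # Hypothetical EMV terms: revenue per unit released, penalty when empty.
    return 2.0 * release - (1.0 if level == 0 else 0.0)

# Finite-horizon stochastic dynamic program: backward induction over
# seasons, maximising expected monetary value (EMV).
V = np.zeros(levels)  # terminal value
policy = np.zeros((horizon, levels), dtype=int)
for t in reversed(range(horizon)):
    V_next = np.zeros(levels)
    for s in range(levels):
        best = -np.inf
        for a in (0, 1):
            if a > s:
                continue  # cannot release more water than is stored
            cont = sum(p * V[min(s - a + w, levels - 1)]  # spill at the cap
                       for w, p in inflow_probs.items())
            val = reward(s, a) + cont
            if val > best:
                best = val
                policy[t, s] = a
        V_next[s] = best
    V = V_next
```

With these numbers the optimal strategy releases whenever water is available, since holding a unit risks losing inflow to spill at the full level.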
On parameter estimation in population models
15:10 Fri 6 May, 2011 :: 715 Ingkarni Wardli :: Dr Joshua Ross :: The University of Adelaide

Essential to applying a mathematical model to a real-world application is calibrating the model to data. Methods for calibrating population models often become computationally infeasible when the population size (more generally, the size of the state space) becomes large, or other complexities, such as time-dependent transition rates or sampling error, are present. Here we will discuss the use of diffusion approximations to perform estimation in several scenarios, with successively reduced assumptions: (i) under the assumption of stationarity (the process had been evolving for a very long time with constant parameter values); (ii) transient dynamics (the assumption of stationarity is invalid, and thus only constant parameter values may be assumed); and, (iii) time-inhomogeneous chains (the parameters may vary with time) and accounting for observation error (a sample of the true state is observed).
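Scenario (i) can be sketched with a mean-reverting (Ornstein-Uhlenbeck) diffusion, a common form for such approximations around a stationary point, though the choice of model here is mine, not the speaker's. Under stationarity the lag-1 autocorrelation of equally spaced observations is rho = exp(-theta * dt), so the reversion rate theta can be recovered from the sample autocorrelation:

```python
import numpy as np

# Simulate a stationary OU process via its exact discrete-time transition.
rng = np.random.default_rng(7)
theta, mu, sigma, dt = 0.5, 10.0, 1.0, 0.1  # illustrative parameter values
n = 50_000
a = np.exp(-theta * dt)
sd = sigma * np.sqrt((1 - a**2) / (2 * theta))
x = np.empty(n)
x[0] = mu
for t in range(1, n):
    x[t] = mu + a * (x[t - 1] - mu) + sd * rng.normal()

# Estimate theta from the sample lag-1 autocorrelation.
xc = x - x.mean()
rho = (xc[1:] * xc[:-1]).sum() / (xc * xc).sum()
theta_hat = -np.log(rho) / dt
```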
Statistical modelling in economic forecasting: semi-parametrically spatio-temporal approach
12:10 Mon 23 May, 2011 :: 5.57 Ingkarni Wardli :: Dawlah Alsulami :: University of Adelaide

How to model spatio-temporal variation in housing prices is an important and challenging problem, as it is of vital importance for both investors and policy makers to assess any movement in housing prices. In this seminar I will talk about a proposed model to estimate movements in housing prices and measure the associated risk more accurately.
Optimal experimental design for stochastic population models
15:00 Wed 1 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Dan Pagendam :: CSIRO, Brisbane

Markov population processes are popular models for studying a wide range of phenomena including the spread of disease, the evolution of chemical reactions and the movements of organisms in population networks (metapopulations). Our ability to use these models effectively can be limited by our knowledge about parameters, such as disease transmission and recovery rates in an epidemic. Recently, there has been interest in devising optimal experimental designs for stochastic models, so that practitioners can collect data in a manner that maximises the precision of maximum likelihood estimates of the parameters for these models. I will discuss some recent work on optimal design for a variety of population models, beginning with some simple one-parameter models where the optimal design can be obtained analytically and moving on to more complicated multi-parameter models in epidemiology that involve latent states and non-exponentially distributed infectious periods. For these more complex models, the optimal design must be arrived at using computational methods and we rely on a Gaussian diffusion approximation to obtain analytical expressions for Fisher's information matrix, which is at the heart of most optimality criteria in experimental design. I will outline a simple cross-entropy algorithm that can be used for obtaining optimal designs for these models. We will also explore the improvements in experimental efficiency when using the optimal design over some simpler designs, such as the design where observations are spaced equidistantly in time.
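For the simple one-parameter case the idea can be made concrete. A standard textbook example (my choice of illustration, not necessarily the talk's): N individuals each die at rate mu, so the number alive at time t is Binomial(N, exp(-mu t)), and the Fisher information about mu from a single observation at time t is I(mu; t) = N t^2 exp(-mu t) / (1 - exp(-mu t)). The optimal single-observation design maximises this over t:

```python
import numpy as np

# Fisher information for the death-process observation time problem.
def fisher_info(t, mu=1.0, N=100):
    p = np.exp(-mu * t)
    return N * t**2 * p / (1.0 - p)

# D-optimal single observation time: maximise I(mu; t) over t on a grid.
ts = np.linspace(0.01, 10.0, 10_000)
t_opt = ts[np.argmax(fisher_info(ts))]
# Setting the derivative to zero gives 2 * (1 - exp(-t)) = t for mu = 1,
# whose root is approximately t = 1.594 (i.e. mu * t ~ 1.594).
```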
Priority queueing systems with random switchover times and generalisations of the Kendall-Takács equation
16:00 Wed 1 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge

In this talk I will review existing analytical results for priority queueing systems with Poisson incoming flows, general service times and a single server which needs some (random) time to switch between requests of different priority. Specifically, I will discuss analytical results for the busy period and workload of such systems with a special structure of switchover times. The results related to the busy period can be seen as generalisations of the famous Kendall-Takács functional equation for M|G|1: being formulated in terms of Laplace-Stieltjes transforms, they represent systems of functional recurrent equations. I will present a methodology and algorithms for their numerical solution; the efficiency of these algorithms is achieved by acceleration of the numerical procedure for solving the classical Kendall-Takács equation. At the end I will identify open problems with regard to such systems; these open problems are mainly related to the modelling of switchover times.
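The classical Kendall-Takács equation itself can be solved numerically by simple fixed-point iteration: the busy-period Laplace-Stieltjes transform B(s) of an M|G|1 queue satisfies B(s) = G*(s + lambda - lambda B(s)), where G* is the LST of the service-time distribution. A sketch with exponential service, where a closed form exists to check against (the parameter values are illustrative):

```python
import math

lam, mu = 0.5, 1.0            # arrival rate and service rate (rho = 0.5)
G = lambda s: mu / (mu + s)   # LST of the exponential service time

def busy_lst(s, iters=200):
    """Fixed-point iteration b <- G*(s + lam - lam*b) for the busy-period LST."""
    b = 0.0
    for _ in range(iters):
        b = G(s + lam - lam * b)
    return b

# For M|M|1 the busy-period LST is known in closed form.
s = 0.3
exact = (lam + mu + s - math.sqrt((lam + mu + s)**2 - 4 * lam * mu)) / (2 * lam)
```

The iteration is a contraction for s > 0 when the queue is stable, which is what makes the numerical procedure (and its accelerated variants mentioned in the talk) work.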
Modelling computer network topologies through optimisation
12:10 Mon 1 Aug, 2011 :: 5.57 Ingkarni Wardli :: Mr Rhys Bowden :: University of Adelaide

The core of the Internet is made up of many different computers (called routers) in many different interconnected networks, owned and operated by many different organisations. A popular and important field of study in the past has been "network topology": for instance, understanding which routers are connected to which other routers, or which networks are connected to which other networks; that is, studying and modelling the connection structure of the Internet. Previous study in this area has been plagued by unreliable or flawed experimental data and debate over appropriate models to use. The Internet Topology Zoo is a new source of network data created from the information that network operators make public. In order to better understand this body of network information we would like the ability to randomly generate network topologies resembling those in the zoo. Leveraging previous wisdom on networks produced as a result of optimisation processes, we propose a simple objective function based on possible economic constraints. By changing the relative costs in the objective function we can change the form of the resulting networks, and we compare these optimised networks to a variety of networks found in the Internet Topology Zoo.
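A toy version of this optimisation view, where the cost model and weights are my own illustration rather than the paper's objective: score a candidate topology by a weighted sum of build cost (total link length) and performance cost (average hop count), and watch the preferred topology flip as the relative costs change.

```python
import itertools
import math

nodes = [(0, 0), (1, 0), (2, 0), (3, 0)]  # four nodes on a line
n = len(nodes)

def total_length(edges):
    return sum(math.dist(nodes[a], nodes[b]) for a, b in edges)

def avg_hops(edges):
    # Floyd-Warshall on hop counts over all node pairs.
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for a, b in edges:
        d[a][b] = d[b][a] = 1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    pairs = list(itertools.combinations(range(n), 2))
    return sum(d[i][j] for i, j in pairs) / len(pairs)

path = [(0, 1), (1, 2), (2, 3)]                  # cheap to build, many hops
mesh = list(itertools.combinations(range(n), 2)) # expensive, one hop everywhere

def cost(edges, alpha):
    # alpha weights build cost against performance cost.
    return alpha * total_length(edges) + (1 - alpha) * avg_hops(edges)

def best(alpha):
    return min([("path", path), ("mesh", mesh)],
               key=lambda kv: cost(kv[1], alpha))[0]
```

When link length is expensive the sparse path wins; when hop count dominates the full mesh wins, mirroring how changing the relative costs in the objective changes the form of the optimised networks.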
Comparing Einstein to Newton via the post-Newtonian expansions
15:10 Fri 19 Aug, 2011 :: 7.15 Ingkarni Wardli :: Dr Todd Oliynyk :: Monash University

Media...
Einstein's general relativity is presently the most accurate theory of gravity. To completely determine the gravitational field, the Einstein field equations must be solved. These equations are extremely complex and outside of a small set of idealized situations, they are impossible to solve directly. However, to make physical predictions or understand physical phenomena, it is often enough to find approximate solutions that are governed by a simpler set of equations. For example, Newtonian gravity approximates general relativity very well in regimes where the typical velocity of the gravitating matter is small compared to the speed of light. Indeed, Newtonian gravity successfully explains much of the behaviour of our solar system and is a simpler theory of gravity. However, for many situations of interest ranging from binary star systems to GPS satellites, the Newtonian approximation is not accurate enough; general relativistic effects must be included. This desire to include relativistic corrections to Newtonian gravity led to the development of the post-Newtonian expansions.
Alignment of time course gene expression data sets using Hidden Markov Models
12:10 Mon 5 Sep, 2011 :: 5.57 Ingkarni Wardli :: Mr Sean Robinson :: University of Adelaide

Time course microarray experiments allow for insight into biological processes by measuring gene expression over a time period of interest. This project is concerned with time course data from a microarray experiment conducted on a particular variety of grapevine over the development of the grape berries at a number of different vineyards in South Australia. The aim of the project is to construct a methodology for combining the data from the different vineyards in order to obtain more precise estimates of the underlying behaviour of the genes over the development process. A major issue in doing so is that the rate of development of the grape berries is different at different vineyards. Hidden Markov models (HMMs) are a well established methodology for modelling time series data in a number of domains and have been previously used for gene expression analysis. Modelling the grapevine data presents a unique modelling issue, namely the alignment of the expression profiles needed to combine the data from different vineyards. In this seminar, I will describe our problem, review HMMs, present an extension to HMMs and show some preliminary results modelling the grapevine data.
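The HMM machinery being reviewed rests on the forward algorithm, which computes the likelihood of an observation sequence by summing over hidden-state paths in linear time. A minimal sketch on my own toy model (not the grapevine data): hidden developmental "stages" that progress left-to-right, each emitting one of two observation symbols.

```python
import numpy as np

A = np.array([[0.7, 0.3, 0.0],   # stage transition probabilities
              [0.0, 0.8, 0.2],   # (left-to-right: no going back a stage)
              [0.0, 0.0, 1.0]])
B = np.array([[0.9, 0.1],        # emission probabilities per stage
              [0.5, 0.5],
              [0.1, 0.9]])
pi = np.array([1.0, 0.0, 0.0])   # development starts in stage 0

def forward(obs):
    """Likelihood of an observation sequence under the HMM."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

obs = [0, 0, 1, 1, 1]
likelihood = forward(obs)
```

Aligning two vineyards' profiles then amounts to letting the hidden stage advance at different rates in each series while sharing the emission model, which is roughly the kind of extension the talk describes.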
Mathematical modelling of lobster populations in South Australia
12:10 Mon 12 Sep, 2011 :: 5.57 Ingkarni Wardli :: Mr John Feenstra :: University of Adelaide

Just how many lobsters are there hanging around the South Australian coastline? How is this number changing over time? What is the demographic breakdown of this number? And what does it matter? Find out the answers to these questions in my upcoming talk. I will provide a brief flavour of the kinds of quantitative methods involved, showcasing relevant applications of regression, population modelling, estimation, as well as simulation. Products of these analyses are biological performance indicators, which are used by government to help decide on fishery controls such as yearly total allowable catch quotas. This assists in maintaining the sustainability of the fishery and hence benefits both the fishers and the lobsters they catch.
Estimating transmission parameters for the swine flu pandemic
15:10 Fri 23 Sep, 2011 :: 7.15 Ingkarni Wardli :: Dr Kathryn Glass :: Australian National University

Media...
Following the onset of a new strain of influenza with pandemic potential, policy makers need specific advice on how fast the disease is spreading, who is at risk, and what interventions are appropriate for slowing transmission. Mathematical models play a key role in comparing interventions and identifying the best response, but models are only as good as the data that inform them. In the early stages of the 2009 swine flu outbreak, many researchers estimated transmission parameters - particularly the reproduction number - from outbreak data. These estimates varied, and were often biased by data collection methods, misclassification of imported cases or as a result of early stochasticity in case numbers. I will discuss a number of the pitfalls in achieving good quality parameter estimates from early outbreak data, and outline how best to avoid them. One of the early indications from swine flu data was that children were disproportionately responsible for disease spread. I will introduce a new method for estimating age-specific transmission parameters from both outbreak and seroprevalence data. This approach allows us to take account of empirical data on human contact patterns, and highlights the need to allow for asymmetric mixing matrices in modelling disease transmission between age groups. Applied to swine flu data from a number of different countries, it presents a consistent picture of higher transmission from children.
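One of the simple first-pass estimators the talk critiques can be sketched directly: fit the early exponential growth rate r from daily case counts, then convert it to a reproduction number via R ≈ 1 + r * Tg, which is exact for an SIR-type model with an exponentially distributed generation time Tg. All numbers below are synthetic, and the generation time is an assumed value.

```python
import numpy as np

# Synthetic early-outbreak case counts: Poisson noise around exponential growth.
rng = np.random.default_rng(1)
days = np.arange(30)
true_r, baseline = 0.12, 50.0
cases = rng.poisson(baseline * np.exp(true_r * days))

# Log-linear regression recovers the growth rate r.
r_hat = np.polyfit(days, np.log(cases), 1)[0]

Tg = 2.6                        # assumed mean generation time in days
R_estimate = 1.0 + r_hat * Tg   # valid for exponential generation times
```

The fragility the talk highlights lives in exactly these choices: the fitted window, the noise model, and the generation-time assumption all move the resulting R.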
Statistical analysis of school-based student performance data
12:10 Mon 10 Oct, 2011 :: 5.57 Ingkarni Wardli :: Ms Jessica Tan :: University of Adelaide

Join me in the journey of being a statistician for 15 minutes of your day (if you are not already one) and experience the task of data cleaning without having to get your own hands dirty. Most of you may have sat the Basic Skills Tests when at school or know someone who currently has to do the NAPLAN (National Assessment Program - Literacy and Numeracy) tests. Tests like these assess student progress and can be used to accurately measure school performance. In trying to answer the research question: "what conclusions about student progress and school performance can be drawn from NAPLAN data or data of a similar nature, using mathematical and statistical modelling and analysis techniques?", I have uncovered some interesting results about the data in my initial data analysis which I shall explain in this talk.
Statistical modelling for some problems in bioinformatics
11:10 Fri 14 Oct, 2011 :: B.17 Ingkarni Wardli :: Professor Geoff McLachlan :: The University of Queensland

Media...
In this talk we consider some statistical analyses of data arising in bioinformatics. The problems include the detection of differential expression in microarray gene-expression data, the clustering of time-course gene-expression data and, lastly, the analysis of modern-day cytometric data. Extensions are considered to the procedures proposed for these three problems in McLachlan et al. (Bioinformatics, 2006), Ng et al. (Bioinformatics, 2006), and Pyne et al. (PNAS, 2009), respectively. The latter references are available at http://www.maths.uq.edu.au/~gjm/.
On the role of mixture distributions in the modelling of heterogeneous data
15:10 Fri 14 Oct, 2011 :: 7.15 Ingkarni Wardli :: Prof Geoff McLachlan :: University of Queensland

Media...
We consider the role that finite mixture distributions have played in the modelling of heterogeneous data, in particular for clustering continuous data via mixtures of normal distributions. A very brief history is given starting with the seminal papers by Day and Wolfe in the sixties before the appearance of the EM algorithm. It was the publication in 1977 of the latter algorithm by Dempster, Laird, and Rubin that greatly stimulated interest in the use of finite mixture distributions to model heterogeneous data. This is because the fitting of mixture models by maximum likelihood is a classic example of a problem that is simplified considerably by the EM's conceptual unification of maximum likelihood estimation from data that can be viewed as being incomplete. In recent times there has been a proliferation of applications in which the number of experimental units n is comparatively small but the underlying dimension p is extremely large as, for example, in microarray-based genomics and other high-throughput experimental approaches. Hence there has been increasing attention given not only in bioinformatics and machine learning, but also in mainstream statistics, to the analysis of complex data in this situation where n is small relative to p. The latter part of the talk shall focus on the modelling of such high-dimensional data using mixture distributions.
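The EM iteration for a normal mixture is short enough to sketch in full. A toy two-component univariate fit (my own synthetic data, with the variances held fixed at 1 to keep the updates minimal):

```python
import numpy as np

# Synthetic heterogeneous data: 30% from N(-2,1), 70% from N(3,1).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

w, mu = 0.5, np.array([-1.0, 1.0])  # initial guesses
for _ in range(200):
    # E-step: posterior probability each point came from component 0
    d0 = w * np.exp(-0.5 * (x - mu[0])**2)
    d1 = (1 - w) * np.exp(-0.5 * (x - mu[1])**2)
    r = d0 / (d0 + d1)
    # M-step: re-estimate the weight and means from the "completed" data
    w = r.mean()
    mu = np.array([(r * x).sum() / r.sum(),
                   ((1 - r) * x).sum() / (1 - r).sum()])
```

The E-step is precisely the conceptual unification the abstract mentions: the component labels are treated as missing data, and each iteration performs a weighted complete-data maximum likelihood fit.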
Likelihood-free Bayesian inference: modelling drug resistance in Mycobacterium tuberculosis
15:10 Fri 21 Oct, 2011 :: 7.15 Ingkarni Wardli :: Dr Scott Sisson :: University of New South Wales

Media...
A central pillar of Bayesian statistical inference is Monte Carlo integration, which is based on obtaining random samples from the posterior distribution. There are a number of standard ways to obtain these samples, provided that the likelihood function can be numerically evaluated. In the last 10 years, there has been a substantial push to develop methods that permit Bayesian inference in the presence of computationally intractable likelihood functions. These methods, termed "likelihood-free" or approximate Bayesian computation (ABC), are now being applied extensively across many disciplines. In this talk, I'll present a brief, non-technical overview of the ideas behind likelihood-free methods. I'll motivate and illustrate these ideas through an analysis of the epidemiological fitness cost of drug resistance in Mycobacterium tuberculosis.
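The core likelihood-free idea fits in a few lines: draw a parameter from the prior, simulate data under it, and keep the draw only if the simulated summary lands close to the observed one, so the likelihood is never evaluated. A bare-bones rejection-ABC sketch on a toy binomial problem (my illustration, not the tuberculosis model):

```python
import numpy as np

rng = np.random.default_rng(42)
n, observed_successes = 100, 37   # toy observed data

accepted = []
while len(accepted) < 2000:
    theta = rng.uniform(0, 1)               # draw from the prior
    sim = rng.binomial(n, theta)            # simulate data under theta
    if abs(sim - observed_successes) <= 2:  # tolerance on the summary
        accepted.append(theta)

posterior_mean = np.mean(accepted)
```

Shrinking the tolerance trades computational cost for accuracy; with exact matching and a uniform prior this recovers the Beta(38, 64) posterior exactly.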
Space of 2D shapes and the Weil-Petersson metric: shapes, ideal fluid and Alzheimer's disease
13:10 Fri 25 Nov, 2011 :: B.19 Ingkarni Wardli :: Dr Sergey Kushnarev :: National University of Singapore

The Weil-Petersson metric is an exciting metric on a space of simple plane curves. In this talk the speaker will introduce the shape space and demonstrate the connection with the Euler-Poincare equations on the group of diffeomorphisms (EPDiff). A numerical method for finding geodesics between two shapes will be demonstrated and applied to the surface of the hippocampus to study the effects of Alzheimer's disease. As another application the speaker will discuss how to do statistics on the shape space and what should be done to improve it.
Collision and instability in a rotating fluid-filled torus
15:10 Mon 12 Dec, 2011 :: Benham Lecture Theatre :: Dr Richard Clarke :: The University of Auckland

The simple experiment discussed in this talk, first conceived by Madden and Mullin (JFM, 1994) as part of their investigations into the non-uniqueness of decaying turbulent flow, consists of a fluid-filled torus which is rotated in a horizontal plane. Turbulence within the contained flow is triggered through a rapid change in its rotation rate. The flow instabilities which transition the flow to this turbulent state, however, are truly fascinating in their own right, and form the subject of this presentation. Flow features observed in both UK- and Auckland-based experiments will be highlighted, and explained through both boundary-layer analysis and full DNS. In concluding we argue that this flow regime, with its compact geometry and lack of cumbersome flow entry effects, presents an ideal regime in which to study many prototype flow behaviours, very much in the same spirit as Taylor-Couette flow.
Forecasting electricity demand distributions using a semiparametric additive model
15:10 Fri 16 Mar, 2012 :: B.21 Ingkarni Wardli :: Prof Rob Hyndman :: Monash University

Electricity demand forecasting plays an important role in short-term load allocation and long-term planning for future generation facilities and transmission augmentation. Planners must adopt a probabilistic view of potential peak demand levels; therefore, density forecasts (providing estimates of the full probability distributions of the possible future values of the demand) are more helpful than point forecasts, and are necessary for utilities to evaluate and hedge the financial risk accrued by demand variability and forecasting uncertainty. Electricity demand in a given season is subject to a range of uncertainties, including underlying population growth, changing technology, economic conditions, prevailing weather conditions (and the timing of those conditions), as well as the general randomness inherent in individual usage. It is also subject to some known calendar effects due to the time of day, day of week, time of year, and public holidays. I will describe a comprehensive forecasting solution designed to take all the available information into account, and to provide forecast distributions from a few hours ahead to a few decades ahead. We use semi-parametric additive models to estimate the relationships between demand and the covariates, including temperatures, calendar effects and some demographic and economic variables. Then we forecast the demand distributions using a mixture of temperature simulation, assumed future economic scenarios, and residual bootstrapping. The temperature simulation is implemented through a new seasonal bootstrapping method with variable blocks. The model is being used by the state energy market operators and some electricity supply companies to forecast the probability distribution of electricity demand in various regions of Australia. It also underpinned the Victorian Vision 2030 energy strategy.
Fast-track study of viscous flow over topography using 'Smoothed Particle Hydrodynamics'
12:10 Mon 16 Apr, 2012 :: 5.57 Ingkarni Wardli :: Mr Stephen Wade :: University of Adelaide

Motivated by certain tea room discussions, I am going to (attempt to) model the flow of a viscous fluid under gravity over conical topography. The method used is 'Smoothed Particle Hydrodynamics' (SPH), which is an easy-to-use but perhaps limited-accuracy computational method. The model could be extended to include solidification and thermodynamic effects that can also be implemented within the framework of SPH, and this has the obvious practical application to the modelling of the coverage of ice cream with ice magic, I mean, lava flows. If I fail to achieve this within the next 4 weeks, I will have to go through a talk on SPH that I gave during honours instead.
Mathematical modelling of the surface adsorption for methane on carbon nanostructures
12:10 Mon 30 Apr, 2012 :: 5.57 Ingkarni Wardli :: Mr Olumide Adisa :: University of Adelaide

In this talk, methane (CH4) adsorption is investigated on both graphite and in the region between two aligned single-walled carbon nanotubes, which we refer to as the groove site. The Lennard–Jones potential function and the continuous approximation are exploited to determine surface binding energies between a single CH4 molecule and graphite and between a single CH4 and two aligned single-walled carbon nanotubes. The modelling indicates that for a CH4 molecule interacting with graphite, the binding energy of the system is minimized when the CH4 carbon is 3.83 angstroms above the surface of the graphitic carbon, while the binding energy of the CH4–groove site system is minimized when the CH4 carbon is 5.17 angstroms away from the common axis shared by the two aligned single-walled carbon nanotubes. These results confirm the current view that for larger groove sites, CH4 molecules in grooves are likely to move towards the outer surfaces of one of the single-walled carbon nanotubes. The results presented in this talk are computationally efficient and are in good agreement with experiments and molecular dynamics simulations, and show that CH4 adsorption on graphite and groove surfaces is more favourable at lower temperatures and higher pressures.
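As a small illustration of locating the minimum of a 6-12 Lennard–Jones interaction numerically (a minimal sketch only; the well depth and length scale below are generic illustrative values, not the fitted parameters of the talk):

```python
def lj(r, epsilon=1.0, sigma=3.4):
    # 6-12 Lennard-Jones potential: repulsive r^-12 and attractive r^-6 terms.
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

# Scan separations to locate the energy minimum numerically.
rs = [2.0 + 0.001 * i for i in range(4000)]  # 2.0 to 6.0 angstroms
r_min = min(rs, key=lj)
print(r_min)  # the analytic minimum sits at 2^(1/6) * sigma
```

For a pairwise interaction the analytic minimum is at 2^(1/6) sigma; the continuous approximation used in the talk instead integrates this potential over the surface, shifting the equilibrium distance.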
Multiscale models of collective cell behaviour: Linear or nonlinear diffusion?
15:10 Fri 4 May, 2012 :: B.21 Ingkarni Wardli :: Dr Matthew Simpson :: Queensland University of Technology

Continuum diffusion models are often used to represent the collective motion of cell populations. Most previous studies have simply used linear diffusion to represent collective cell spreading, while others found that degenerate nonlinear diffusion provides a better match to experimental cell density profiles. There is no guidance available in the mathematical biology literature with regard to which approach is more appropriate. Furthermore, there is no knowledge of particular experimental measurements that can be made to distinguish between situations where these two models are appropriate. We provide a link between individual-based and continuum models using a multiscale approach in which we analyse the collective motion of a population of interacting agents in a generalized lattice-based exclusion process. For round agents that occupy a single lattice site, we find that the relevant continuum description is a linear diffusion equation, whereas for elongated rod-shaped agents that occupy L adjacent lattice sites we find that the relevant continuum description is a nonlinear diffusion equation related to the porous media equation. We show that there are several reasonable approaches for dealing with agent size effects, and that these different approaches are related mathematically through the concept of mean action time. We extend our results to consider proliferation and travelling waves where greater care must be taken to ensure that the continuum model replicates the discrete process. This is joint work with Dr Ruth Baker (Oxford) and Dr Scott McCue (QUT).
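The two continuum descriptions being contrasted can be written explicitly. The power-law form of D(C) below is the generic porous-medium form, shown for orientation; the exponent appropriate to rod-shaped agents is derived in the talk, not assumed here.

```latex
\frac{\partial C}{\partial t} = D\,\nabla^2 C
\quad \text{(linear)}, \qquad
\frac{\partial C}{\partial t} = \nabla \cdot \bigl( D(C)\,\nabla C \bigr),
\quad D(C) = D_0 \left(\frac{C}{C_0}\right)^{m}, \; m > 0
\quad \text{(degenerate nonlinear)}.
```

The nonlinear case is called degenerate because D(C) vanishes as C tends to zero, which produces the sharp-fronted density profiles seen in some cell-spreading experiments.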
Modelling protective anti-tumour immunity using a hybrid agent-based and delay differential equation approach
15:10 Fri 11 May, 2012 :: B.21 Ingkarni Wardli :: Dr Peter Kim :: University of Sydney

Although cancers seem to consistently evade current medical treatments, the body's immune defences seem quite effective at controlling incipient tumours. Understanding how our immune systems provide such protection against early-stage tumours and how this protection could be lost will provide insight into designing next-generation immune therapies against cancer. To engage this problem, we formulate a mathematical model of the immune response against small, incipient tumours. The model considers the initial stimulation of the immune response in lymph nodes and the resulting immune attack on the tumour and is formulated as a hybrid agent-based and delay differential equation model.
On the full holonomy group of special Lorentzian manifolds
13:10 Fri 25 May, 2012 :: Napier LG28 :: Dr Thomas Leistner :: University of Adelaide

The holonomy group of a semi-Riemannian manifold is defined as the group of parallel transports along loops based at a point. Its connected component, the `restricted holonomy group', is given by restricting in this definition to contractible loops. The restricted holonomy can essentially be described by its Lie algebra and many classification results are obtained in this way. In contrast, the `full' holonomy group is a more global object and classification results are out of reach. In the talk I will describe recent results with H. Baum and K. Laerz (both HU Berlin) about the full holonomy group of so-called `indecomposable' Lorentzian manifolds. I will explain a construction method that arises from analysing the effects on holonomy when dividing the manifold by the action of a properly discontinuous group of isometries and present several examples of Lorentzian manifolds with disconnected holonomy groups.
The change of probability measure for jump processes
12:10 Mon 28 May, 2012 :: 5.57 Ingkarni Wardli :: Mr Ahmed Hamada :: University of Adelaide

In financial derivatives pricing theory, it is very common to change the probability measure from the historical "real-world" measure to a risk-neutral measure, a construction that rests on the no-arbitrage condition. Girsanov's theorem is the best-known example of this technique and is used when price randomness is modelled by Brownian motions. Jump processes are other genuine candidates for modelling market randomness that have proved effective in the recent literature, so how can a change of measure be performed for such processes? This talk will address this question by introducing the no-arbitrage condition, discussing Girsanov's theorem for diffusion and jump processes, and presenting a concrete example.
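For jump processes, the simplest concrete change of measure is the change of intensity of a Poisson process, a standard textbook example (not necessarily the one presented in the talk). If N_t is a Poisson process with intensity lambda under P, the measure Q under which it has intensity lambda-tilde is defined by the density process

```latex
\left. \frac{dQ}{dP} \right|_{\mathcal{F}_t}
  = \left( \frac{\tilde{\lambda}}{\lambda} \right)^{N_t}
    e^{(\lambda - \tilde{\lambda})\,t},
```

the jump-process analogue of the Brownian Girsanov density exp(-theta W_t - theta^2 t / 2), which shifts the drift of W by -theta.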
Model turbulent floods based upon the Smagorinsky large eddy closure
12:10 Mon 4 Jun, 2012 :: 5.57 Ingkarni Wardli :: Mr Meng Cao :: University of Adelaide

Rivers, floods and tsunamis are often very turbulent. Conventional models of such environmental fluids are typically based on depth-averaged inviscid irrotational flow equations. We explore changing this base to the turbulent Smagorinsky large eddy closure. The aim is to more appropriately model the fluid dynamics of such complex environmental fluids by using this turbulent closure. Large changes in fluid depth are allowed. Computer algebra constructs the slow manifold of the flow in terms of the fluid depth h and the mean turbulent lateral velocities u and v. The major challenge is to deal with the nonlinear stress tensor in the Smagorinsky closure. The model integrates the effects of inertia, self-advection, bed drag, gravitational forcing and turbulent dissipation with minimal assumptions. Although the resultant model is close to established models, the real outcome is creating a sound basis for the modelling so others, in their modelling of more complex situations, can systematically include more complex physical processes.
Adventures with group theory: counting and constructing polynomial invariants for applications in quantum entanglement and molecular phylogenetics
15:10 Fri 8 Jun, 2012 :: B.21 Ingkarni Wardli :: Dr Peter Jarvis :: The University of Tasmania

In many modelling problems in mathematics and physics, a standard challenge is dealing with several repeated instances of a system under study. If linear transformations are involved, then the machinery of tensor products steps in, and it is the job of group theory to control how the relevant symmetries lift from a single system, to having many copies. At the level of group characters, the construction which does this is called PLETHYSM. In this talk all this will be contextualised via two case studies: entanglement invariants for multipartite quantum systems, and Markov invariants for tree reconstruction in molecular phylogenetics. By the end of the talk, listeners will have understood why Alice, Bob and Charlie love Cayley's hyperdeterminant, and they will know why the three squangles -- polynomial beasts of degree 5 in 256 variables, with a modest 50,000 terms or so -- can tell us a lot about quartet trees!
Drawing of Viscous Threads with Temperature-dependent Viscosity
14:10 Fri 10 Aug, 2012 :: Engineering North N218 :: Dr Jonathan Wylie :: City University of Hong Kong

The drawing of viscous threads is important in a wide range of industrial applications and is a primary manufacturing process in the optical fiber and textile industries. Most of the materials used in these processes have viscosities that vary extremely strongly with temperature. We investigate the role played by viscous heating in the drawing of viscous threads. Usually, the effects of viscous heating and inertia are neglected because the parameters that characterize them are typically very small. However, by performing a detailed theoretical analysis we surprisingly show that even very small amounts of viscous heating can lead to a runaway phenomenon. On the other hand, inertia prevents runaway, and the interplay between viscous heating and inertia results in very complicated dynamics for the system. Even more surprisingly, in the absence of viscous heating, we find that a new type of instability can occur when a thread is heated by a radiative heat source. By analyzing an asymptotic limit of the Navier-Stokes equation we provide a theory that describes the nature of this instability and explains the seemingly counterintuitive behavior.
Infectious diseases modelling: from biology to public health policy
15:10 Fri 24 Aug, 2012 :: B.20 Ingkarni Wardli :: Dr James McCaw :: The University of Melbourne

The mathematical study of human-to-human transmissible pathogens has established itself as a complementary methodology to the traditional epidemiological approach. The classic susceptible--infectious--recovered model paradigm has been used to great effect to gain insight into the epidemiology of endemic diseases such as influenza and pertussis, and the emergence of novel pathogens such as SARS and pandemic influenza. The modelling paradigm has also been taken within the host and used to explain the within-host dynamics of viral (or bacterial or parasite) infections, with implications for our understanding of infection, emergence of drug resistance and optimal drug-interventions. In this presentation I will provide an overview of the mathematical paradigm used to investigate both biological and epidemiological infectious diseases systems, drawing on case studies from influenza, malaria and pertussis research. I will conclude with a summary of how infectious diseases modelling has assisted the Australian government in developing its pandemic preparedness and response strategies.
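The classic susceptible-infectious-recovered paradigm mentioned above can be sketched in a few lines. This is a minimal forward-Euler illustration with made-up parameters (a basic reproduction number of beta/gamma = 2), not a model from the talk:

```python
def sir_step(s, i, r, beta, gamma, dt):
    # One forward-Euler step of the classic SIR equations.
    new_inf = beta * s * i * dt  # S -> I transitions
    new_rec = gamma * i * dt     # I -> R transitions
    return s - new_inf, i + new_inf - new_rec, r + new_rec

# Illustrative parameters only: transmission beta, recovery gamma.
s, i, r = 0.99, 0.01, 0.0
beta, gamma = 0.4, 0.2
for _ in range(2000):  # 200 days at dt = 0.1
    s, i, r = sir_step(s, i, r, beta, gamma, 0.1)
print(round(r, 2))  # final epidemic size (recovered fraction)
```

The final recovered fraction illustrates the threshold behaviour that underpins the epidemiological insights described above: it depends on beta/gamma, not on beta and gamma separately.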
Electrokinetics of concentrated suspensions of spherical particles
15:10 Fri 28 Sep, 2012 :: B.21 Ingkarni Wardli :: Dr Bronwyn Bradshaw-Hajek :: University of South Australia

Electrokinetic techniques are used to gather specific information about concentrated dispersions such as electronic inks, mineral processing slurries, pharmaceutical products and biological fluids (e.g. blood). But, like most experimental techniques, intermediate quantities are measured, and consequently the method relies explicitly on theoretical modelling to extract the quantities of experimental interest. A self-consistent cell-model theory of electrokinetics can be used to determine the electrical conductivity of a dense suspension of spherical colloidal particles, and thereby determine the quantities of interest (such as the particle surface potential). The numerical predictions of this model compare well with published experimental results. High frequency asymptotic analysis of the cell-model leads to some interesting conclusions.
Towards understanding fundamental interactions for nanotechnology
15:10 Fri 5 Oct, 2012 :: B.20 Ingkarni Wardli :: Dr Doreen Mollenhauer :: MacDiarmid Institute for Advanced Materials and Nanotechnology, Wellington

Multiple simultaneous interactions show unique collective properties that are qualitatively different from properties displayed by their monovalent constituents. Multivalent interactions play an important role for the self-organization of matter, recognition processes and signal transduction. A broad understanding of these interactions is therefore crucial in order to answer central questions and make new developments in the field of biotechnology and material science. In the framework of a joint experimental and theoretical project we study the electronic effects in monovalent and multivalent interactions by doing quantum chemical calculations. The particular interest of our investigations is in organic molecules interacting with gold nanoparticles or graphene. The main purpose is to analyze the nature of multivalent bonding in comparison to monovalent interaction.
AD Model Builder and the estimation of lobster abundance
12:10 Mon 22 Oct, 2012 :: B.21 Ingkarni Wardli :: Mr John Feenstra :: University of Adelaide

Determining how many millions of lobsters reside in our waters, and how this number changes over time, is a central aim of lobster stock assessment. ADMB is powerful optimisation software for modelling and solving complex non-linear problems using automatic differentiation, and it plays a major role in SA and worldwide in fisheries stock assessment analyses. In this talk I will provide a brief description of an example modelling problem, key features and use of ADMB.
Thin-film flow in helically-wound channels with small torsion
15:10 Fri 26 Oct, 2012 :: B.21 Ingkarni Wardli :: Dr Yvonne Stokes :: University of Adelaide

The study of flow in open helically-wound channels has application to many natural and industrial flows. We will consider laminar flow down helically-wound channels of rectangular cross section and with small torsion, in which the fluid depth is small. Assuming a steady-state flow that is independent of position along the axis of the channel, the flow solution may be determined in the two-dimensional cross section of the channel. A thin-film approximation yields explicit expressions for the fluid velocity in terms of the free-surface shape. The latter satisfies an interesting non-linear ordinary differential equation that, for a channel of rectangular cross section, has an analytical solution. The predictions of the thin-film model are shown to be in good agreement with much more computationally intensive solutions of the small-helix-torsion Navier-Stokes equations. This work has particular relevance to spiral particle separators used in the minerals processing industry. Early work on modelling of particle-laden thin-film flow in spiral channels will also be discussed.
A multiscale approach to reaction-diffusion processes in domains with microstructure
15:10 Fri 15 Mar, 2013 :: B.18 Ingkarni Wardli :: Prof Malte Peter :: University of Augsburg

Reaction-diffusion processes occur in many materials with microstructure such as biological cells, steel or concrete. The main difficulty in modelling and simulating accurately such processes is to account for the fine microstructure of the material. One method of upscaling multi-scale problems, which has proven reliable for obtaining feasible macroscopic models, is the method of periodic homogenisation. The talk will give an introduction to multi-scale modelling of chemical mechanisms in domains with microstructure as well as to the method of periodic homogenisation. Moreover, a few aspects of solving the resulting systems of equations numerically will also be discussed.
The boundary conditions for macroscale modelling of a discrete diffusion system with periodic diffusivity
12:10 Mon 29 Apr, 2013 :: B.19 Ingkarni Wardli :: Chen Chen :: University of Adelaide

Many mathematical and engineering problems have a multiscale nature. There is a vast body of theory supporting multiscale modelling on infinite domains, such as homogenisation theory and centre manifold theory. To date, however, little consideration has been given to the correct boundary conditions to be used at the edge of a macroscale model. In this seminar, I will present how to derive macroscale boundary conditions for the diffusion system.
Filtering Theory in Modelling the Electricity Market
12:10 Mon 6 May, 2013 :: B.19 Ingkarni Wardli :: Ahmed Hamada :: University of Adelaide

In mathematical finance, as in many other fields where applied mathematics is a powerful tool, we assume that a model is good enough when it captures the different sources of randomness affecting the quantity of interest, which in this case is electricity prices. The power market is very different from other markets in terms of the sources of randomness that can be observed in the features and evolution of prices. We start by suggesting a new model that simulates electricity prices; this model is constructed by adding a periodicity term, a jump term and a positive mean-reverting term. The latter term is driven by a non-observable Markov process, so in order to price financial products we have to use filtering theory to deal with the non-observable process. These techniques are gaining much interest from practitioners and researchers in the field of financial mathematics.
Progress in the prediction of buoyancy-affected turbulence
15:10 Fri 17 May, 2013 :: B.18 Ingkarni Wardli :: Dr Daniel Chung :: University of Melbourne

Buoyancy-affected turbulence represents a significant challenge to our understanding, yet it dominates many important flows that occur in the ocean and atmosphere. The presentation will highlight some recent progress in the characterisation, modelling and prediction of buoyancy-affected turbulence using direct and large-eddy simulations, along with implications for the characterisation of mixing in the ocean and the low-cloud feedback in the atmosphere. Specifically, direct numerical simulation data of stratified turbulence will be employed to highlight the importance of boundaries in the characterisation of turbulent mixing in the ocean. Then, a subgrid-scale model that captures the anisotropic character of stratified mixing will be developed for large-eddy simulation of buoyancy-affected turbulence. Finally, the subgrid-scale model is utilised to perform a systematic large-eddy simulation investigation of the archetypal low-cloud regimes, from which the link between the lower-tropospheric stability criterion and the cloud fraction is interpreted.
Pulsatile Flow
12:10 Mon 20 May, 2013 :: B.19 Ingkarni Wardli :: David Wilke :: University of Adelaide

Blood flow within the human arterial system is inherently unsteady as a consequence of the pulsations of the heart. The unsteady nature of the flow gives rise to a number of important flow features which may be critical in understanding pathologies of the cardiovascular system. For example, it is believed that large oscillations in wall shear stress may enhance the effects of atherosclerosis, among other pathologies. In this talk I will present some of the basic concepts of pulsatile flow and follow the analysis first performed by J.R. Womersley in his seminal 1955 paper.
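A quantity central to Womersley's analysis is the dimensionless Womersley number alpha = R * sqrt(omega / nu), which measures unsteady inertia against viscous forces. The following minimal sketch computes it for rough aorta-scale magnitudes; the numbers are illustrative textbook values, not figures from the talk:

```python
import math

def womersley(radius_m, omega_rad_s, nu_m2_s):
    # Womersley number: ratio of unsteady inertial to viscous effects.
    return radius_m * math.sqrt(omega_rad_s / nu_m2_s)

# Illustrative, roughly aorta-scale values (assumptions, not talk data):
R = 0.012                      # vessel radius, m
omega = 2 * math.pi * 75 / 60  # heart rate ~75 beats/min, rad/s
nu = 3.3e-6                    # kinematic viscosity of blood, m^2/s
print(round(womersley(R, omega, nu), 1))
```

Large alpha (order 10 and above) means the oscillatory boundary layer is thin and the velocity profile departs strongly from the parabolic Poiseuille shape, which is the regime Womersley's solution describes.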
Multiscale modelling couples patches of wave-like simulations
12:10 Mon 27 May, 2013 :: B.19 Ingkarni Wardli :: Meng Cao :: University of Adelaide

A multiscale model is proposed to significantly reduce the expensive numerical simulations of complicated waves over large spatial domains. The multiscale model is built from given microscale simulations of complicated physical processes such as sea ice or turbulent shallow water. Our long term aim is to enable macroscale simulations obtained by coupling small patches of simulations together over large physical distances. This initial work explores the coupling of patch simulations of wave-like PDEs. With the line of development being towards water waves, we discuss the dynamics of two complementary fields called the 'depth' h and 'velocity' u. A staggered grid is used for the microscale simulation of the depth h and velocity u. We introduce a macroscale staggered grid to couple the microscale patches. Linear or quadratic interpolation provides boundary conditions on the field in each patch. Linear analysis of the whole coupled multiscale system establishes that the resultant macroscale dynamics is appropriate. Numerical simulations support the linear analysis. This multiscale method should empower the feasible computation of large scale simulations of wave-like dynamics with complicated underlying physics.
Thin-film flow in helical channels
12:10 Mon 9 Sep, 2013 :: B.19 Ingkarni Wardli :: David Arnold :: University of Adelaide

Spiral particle separators are used in the mineral processing industry to refine ores. A slurry, formed by mixing crushed ore with a fluid, is run down a helical channel and at the end of the channel, the particles end up sorted in different sections of the channel. Design of such devices is largely experimentally based, and mathematical modelling of flow in helical channels is relatively limited. In this talk, I will outline some of the work that I have been doing on thin-film flow in helical channels.
Modelling the South Australian garfish population slice by slice.
12:10 Mon 14 Oct, 2013 :: B.19 Ingkarni Wardli :: John Feenstra :: University of Adelaide

In this talk I will provide a taste of how South Australian garfish populations are modelled. The role and importance of garfish 'slices' will be explained and how these help produce important reporting quantities of yearly recruitment, legal-size biomass, and exploitation rate within a framework of an age and length based population model.
Model Misspecification due to Site Specific Rate Heterogeneity: how is tree inference affected?
12:10 Mon 21 Oct, 2013 :: B.19 Ingkarni Wardli :: Stephen Crotty :: University of Adelaide

In this talk I'll answer none of the questions you ever had about phylogenetics, but hopefully some you didn't. I'll be giving this presentation at a phylogenetics conference in 3 weeks, so sorry it is a little light on background. You've been warned! Phylogeneticists have long recognised that different sites in a DNA sequence can experience different rates of nucleotide substitution, and many models have been developed to accommodate this rate heterogeneity. But what happens when a single site exhibits rate heterogeneity along different branches of an evolutionary tree? In this talk I'll introduce the notion of Site Specific Rate Heterogeneity (SSRH) and investigate a simple case, looking at the impact of SSRH on inference via maximum parsimony, neighbour joining and maximum likelihood.
Modelling and optimisation of group dose-response challenge experiments
12:10 Mon 28 Oct, 2013 :: B.19 Ingkarni Wardli :: David Price :: University of Adelaide

An important component of scientific research is the 'experiment'. Effective design of these experiments is important and, accordingly, has received significant attention under the heading 'optimal experimental design'. However, until recently, little work has been done on optimal experimental design for experiments where the underlying process can be modelled by a Markov chain. In this talk, I will discuss some of the work that has been done in the field of optimal experimental design for Markov chains, and some of the work that I have done in applying this theory to dose-response challenge experiments for the bacterium Campylobacter jejuni in chickens.
A gentle introduction to bubble evolution in Hele-Shaw flows
15:10 Fri 22 Nov, 2013 :: 5.58 (Ingkarni Wardli) :: Dr Scott McCue :: QUT

A Hele-Shaw cell is easy to make and serves as a fun toy for an applied mathematician to play with. If we inject air into a Hele-Shaw cell that is otherwise filled with viscous fluid, we can observe a bubble of air growing in size. The process is highly unstable, and the bubble boundary expands in an uneven fashion, leading to striking fingering patterns (look up Hele-Shaw cell or Saffman-Taylor instability on YouTube). From a mathematical perspective, modelling these Hele-Shaw flows is interesting because the governing equations are sufficiently "simple" that a considerable amount of analytical progress is possible. Indeed, there is no other context in which (genuinely) two-dimensional moving boundary problems are so tractable. More generally, Hele-Shaw flows are important as they serve as prototypes for more complicated (and important) physical processes such as crystal growth and diffusion limited aggregation. I will give an introduction to some of the main ideas and summarise some of my present research in this area.
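The tractability referred to above comes from the standard depth-averaged description of Hele-Shaw flow, which is Darcy-like (a textbook formulation, included here for orientation): for plate gap b and fluid viscosity mu, the gap-averaged velocity is proportional to the pressure gradient, and incompressibility makes the pressure harmonic,

```latex
\mathbf{u} = -\frac{b^2}{12\mu}\,\nabla p, \qquad
\nabla \cdot \mathbf{u} = 0 \;\Rightarrow\; \nabla^2 p = 0,
```

with the bubble boundary moving at the local fluid velocity. Solving Laplace's equation with a free boundary is what makes complex-variable and conformal-mapping methods so effective here.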
Buoyancy driven exchange flows in the nearshore regions of lakes and reservoirs
15:10 Mon 2 Dec, 2013 :: 5.58 (Ingkarni Wardli) :: Professor John Patterson :: University of Sydney

Natural convection is the flow driven by differences in density, and is ubiquitous in nature and industry. It is the source of most environmental flows, and is the basis for almost all industrial heat exchange processes. It operates on both massive and micro scales. It is usually considered as a flow driven by temperature gradients, but could equally be from a gradient in any density determining property - salinity is one obvious example. It also depends on gravity; so magnetohydrodynamics becomes relevant as well. One particular interesting and environmentally relevant flow is the exchange flow in the nearshore regions of lakes and reservoirs. This occurs because of the effects of a decreasing depth approaching the shore, resulting in laterally unequal heat loss and heat gain during the diurnal cooling and heating cycle. This presentation will discuss some of the results obtained by the Natural Convection Group at Sydney University in analytical, numerical and experimental investigations of this mechanism, and the implications for lake water quality.
The effects of pre-existing immunity
15:10 Fri 7 Mar, 2014 :: B.18 Ingkarni Wardli :: Associate Professor Jane Heffernan :: York University, Canada

Immune system memory, also called immunity, is gained as a result of primary infection or vaccination, and can be boosted after vaccination or secondary infections. Immunity is developed so that the immune system is primed to react and fight a pathogen earlier and more effectively in secondary infections. The effects of memory, however, on pathogen propagation in an individual host (in-host) and a population (epidemiology) are not well understood. Mathematical models of infectious diseases, employing dynamical systems, computer simulation and bifurcation analysis, can provide projections of pathogen propagation, show outcomes of infection and help inform public health interventions. In the Modelling Infection and Immunity (MI^2) lab, we develop and study biologically informed mathematical models of infectious diseases at both levels of infection, and combine these models into comprehensive multi-scale models so that the effects of individual immunity in a population can be determined. In this talk we will discuss some of the interesting mathematical phenomena that arise in our models, and show how our results are directly applicable to what is known about the persistence of infectious diseases.
Carrying capacity for finfish aquaculture in Spencer Gulf: rapid assessment using hydrodynamic and near-field, semi-analytic solutions
15:10 Fri 11 Apr, 2014 :: 5.58 Ingkarni Wardli :: Associate Professor John Middleton :: SARDI Aquatic Sciences and University of Adelaide

Aquaculture farming involves daily feeding of finfish and a subsequent excretion of nutrients into Spencer Gulf. Typically, finfish farming is done in six or so 50 m diameter cages within 600 m x 600 m lease sites. To help regulate the industry, it is desired that the finfish feed rates and the associated nutrient flux into the ocean are determined such that the maximum nutrient concentration c does not exceed a prescribed value (say cP) for ecosystem health. The prescribed value cP is determined by guidelines from the E.P.A. The concept is known as carrying capacity, since limiting the feed rates limits the biomass of the farmed finfish. Here, we model the concentrations that arise from a constant input flux (F) of nutrients in a source region (the cage or lease) using the (depth-averaged) two-dimensional advection-diffusion equation for constant and sinusoidal (tidal) currents. Application of the divergence theorem to this equation results in a new scale estimate of the maximum flux F (and thus feed rate), given by F = cP/T* (1), where cP is the maximum allowed concentration and T* is a new time scale of “flushing” that involves both advection and diffusion. The scale estimate (1) is then shown to compare favourably with mathematically exact solutions of the advection-diffusion equation obtained using Green’s functions and Fourier transforms. The maximum nutrient flux and associated feed rates are then estimated everywhere in Spencer Gulf through the development and validation of a hydrodynamic model. The model provides seasonal averages of the mean currents U and horizontal diffusivities KS that are needed to estimate T*. The diffusivities are estimated from a shear dispersal model of the tides, which are very large in the gulf. The estimates have been provided to PIRSA Fisheries and Aquaculture to assist in the sustainable expansion of finfish aquaculture.
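The scale estimate (1) is simple enough to sketch in code. The combination rule below - adding an advective flushing rate U/L to a diffusive rate K/L^2 for a lease of horizontal scale L - is an assumption made purely for illustration; the talk derives its T* via the divergence theorem.

```python
def flushing_time(L, U, K):
    """Illustrative flushing time T* for a source region of scale L [m],
    mean current U [m/s] and horizontal diffusivity K [m^2/s].
    Assumes (for illustration only) the rates simply add."""
    advective_rate = U / L        # 1/s, flushing by the mean current
    diffusive_rate = K / L ** 2   # 1/s, flushing by turbulent diffusion
    return 1.0 / (advective_rate + diffusive_rate)

def max_nutrient_flux(cP, L, U, K):
    """Scale estimate (1): F = cP / T*, the maximum sustainable
    nutrient flux for a prescribed concentration limit cP."""
    return cP / flushing_time(L, U, K)

# Example: a 600 m lease, 0.1 m/s mean current, 50 m^2/s diffusivity
T_star = flushing_time(600.0, 0.1, 50.0)
F_max = max_nutrient_flux(1.0, 600.0, 0.1, 50.0)
```

The point of the scale estimate is exactly this cheapness: given seasonal maps of U and K from the hydrodynamic model, a maximum feed rate can be computed everywhere in the gulf without solving the full advection-diffusion equation.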
Outlier removal using the Bayesian information criterion for group-based trajectory modelling
12:10 Mon 28 Apr, 2014 :: B.19 Ingkarni Wardli :: Chris Davies :: University of Adelaide

Media...
Attributes measured longitudinally can be used to define discrete paths of measurements, or trajectories, for each individual in a given population. Group-based trajectory modelling methods can be used to identify subgroups of trajectories within a population, such that trajectories that are grouped together are more similar to each other than to trajectories in distinct groups. Existing methods generally allocate every individual trajectory into one of the estimated groups. However, this does not allow for the possibility that some individuals may be following trajectories so different from the rest of the population that they should not be included in a group-based trajectory model. This results in these outlying trajectories being treated as though they belong to one of the groups, distorting the estimated trajectory groups and any subsequent analyses that use them. We have developed an algorithm for removing outlying trajectories based on the maximum change in the Bayesian information criterion (BIC) due to removing a single trajectory. As well as deciding which trajectory to remove, the number of groups in the model can also change. The decision to remove an outlying trajectory is made by comparing the log-likelihood contributions of the observations to those of simulated samples from the estimated group-based trajectory model. In this talk the algorithm will be detailed and an application of its use will be demonstrated.
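The greedy ranking step can be sketched schematically. The stand-in "model" here is a single Gaussian fitted independently at each time point - the talk uses full group-based trajectory models, lets the number of groups change, and applies a simulation-based stopping rule - so only the delete-the-worst-trajectory loop is illustrated, and all names are invented.

```python
import numpy as np

def gaussian_bic(X):
    """BIC of an independent Gaussian fit at each time point.
    X has one row per trajectory, one column per time point."""
    n, t = X.shape
    var = X.var(axis=0) + 1e-12                       # MLE variance per time point
    loglik = -0.5 * n * np.sum(np.log(2 * np.pi * var) + 1.0)
    n_params = 2 * t                                  # a mean and a variance per time point
    return n_params * np.log(n) - 2.0 * loglik

def rank_outlier_candidates(X, n_candidates=3):
    """Greedily delete the trajectory whose removal gives the largest
    drop in BIC; return the removal order and the BIC changes."""
    kept = list(range(len(X)))
    order, deltas = [], []
    for _ in range(n_candidates):
        base = gaussian_bic(X[kept])
        changes = [gaussian_bic(X[[j for j in kept if j != i]]) - base
                   for i in kept]
        best = int(np.argmin(changes))
        order.append(kept[best])
        deltas.append(changes[best])
        kept.pop(best)
    return order, deltas
```

With twenty well-behaved trajectories and one extreme one, the extreme trajectory is ranked first and its BIC change dwarfs the rest - the separation that the algorithm's stopping rule then formalises.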
Ice floe collisions in the Marginal Ice Zone
12:10 Mon 12 May, 2014 :: B.19 Ingkarni Wardli :: Lucas Yiew :: University of Adelaide

Media...
In an era of climate change, it is becoming increasingly important to model the dynamics of sea-ice cover in the polar regions. The Marginal Ice Zone represents a vast region of ice cover strongly influenced by the effects of ocean waves. As ocean waves penetrate this region, wave energy is progressively dispersed through energy dissipative mechanisms such as collisions between ice floes (discrete chunks of ice). In this talk I will discuss the mathematical models required to build a collision model, and the validation of these models with experimental results.
Group meeting
15:10 Fri 6 Jun, 2014 :: 5.58 Ingkarni Wardli :: Meng Cao and Trent Mattner :: University of Adelaide

Meng Cao :: Multiscale modelling couples patches of nonlinear wave-like simulations :: Abstract: The multiscale gap-tooth scheme is built from given microscale simulations of complicated physical processes to empower macroscale simulations. By coupling small patches of simulations over unsimulated physical gaps, large savings in computational time are possible. So far the gap-tooth scheme has been developed for dissipative systems, but wave systems are also of great interest. This work develops the gap-tooth scheme for nonlinear microscale simulations of wave-like systems. Classic macroscale interpolation provides a generic coupling between patches that achieves arbitrarily high-order consistency between the multiscale scheme and the underlying microscale dynamics. Eigen-analysis indicates that the resultant gap-tooth scheme empowers feasible computation of large-scale simulations of wave-like dynamics with complicated underlying physics. As a pilot study, we implement numerical simulations of dam-breaking waves by the gap-tooth scheme. Comparison between a gap-tooth simulation, a microscale simulation over the whole domain, and some published experimental data on dam breaking demonstrates that the gap-tooth scheme feasibly computes large-scale wave-like dynamics with computational savings. Trent Mattner :: Coupled atmosphere-fire simulations of the Canberra 2003 bushfires using WRF-Sfire :: Abstract: The Canberra fires of January 18, 2003 are notorious for the extreme fire behaviour and fire-atmosphere-topography interactions that occurred, including lee-slope fire channelling, pyrocumulonimbus development and tornado formation. In this talk, I will discuss coupled fire-weather simulations of the Canberra fires using WRF-SFire.
In these simulations, a fire-behaviour model is used to dynamically predict the evolution of the fire front according to local atmospheric and topographic conditions, as well as the associated heat and moisture fluxes to the atmosphere. It is found that the predicted fire front and heat flux are not too bad, bearing in mind the complexity of the problem and the severe modelling assumptions made. However, the predicted moisture flux is too low, which has some impact on atmospheric dynamics.
Modelling the mean-field behaviour of cellular automata
12:10 Mon 4 Aug, 2014 :: B.19 Ingkarni Wardli :: Kale Davies :: University of Adelaide

Media...
Cellular automata (CA) are lattice-based models in which agents fill the lattice sites and behave according to some specified rule. CA are particularly useful when modelling cell behaviour, and as such many researchers consider CA models in which agents undergo motility and proliferation type events. We are particularly interested in predicting the average behaviour of these models. In this talk I will show how a system of differential equations can be derived for the system and discuss the difficulties that arise in even the seemingly simple case of a CA with motility and proliferation.
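For the simplest such CA - proliferation into empty neighbouring sites at rate lam, with spatial correlations ignored - the standard mean-field approximation for the average site occupancy C(t) is the logistic equation dC/dt = lam C(1 - C). A minimal sketch of that mean-field curve (not the specific system analysed in the talk):

```python
import math

def mean_field_occupancy(C0, lam, t_end, dt=1e-3):
    """Euler integration of the logistic mean-field ODE
    dC/dt = lam * C * (1 - C) for average CA occupancy C(t)."""
    C, t = C0, 0.0
    while t < t_end:
        C += dt * lam * C * (1.0 - C)
        t += dt
    return C

def logistic_exact(C0, lam, t):
    """Closed-form logistic solution, for comparison."""
    e = math.exp(lam * t)
    return C0 * e / (1.0 + C0 * (e - 1.0))
```

Discrepancies between such a mean-field curve and occupancy averaged over CA realisations are precisely the correlation effects that make even this "seemingly simple" case difficult.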
Hydrodynamics and rheology of self-propelled colloids
15:10 Fri 8 Aug, 2014 :: B17 Ingkarni Wardli :: Dr Sarthok Sircar :: University of Adelaide

The sub-cellular world has many components in common with soft condensed matter systems (polymers, colloids and liquid crystals). But it has novel properties, not present in traditional complex fluids, arising from a rich spectrum of non-equilibrium behaviour: flocking, chemotaxis and bioconvection. The talk is divided into two parts. In the first half, we will (get an idea of how to) derive a hydrodynamic model for self-propelled particles of an arbitrary shape from first principles, in a sufficiently dilute suspension limit, moving in a 3-dimensional space inside a viscous solvent. The model is then restricted to particles with ellipsoidal geometry to quantify the interplay of the long-range excluded-volume and the short-range self-propulsion effects. The expressions for the constitutive stresses, relating the kinetic theory to the momentum transport equations, are derived using a combination of the virtual work principle (for extra elastic stresses) and symmetry arguments (for active stresses). The second half of the talk will highlight my current numerical work. In particular, we will exploit a specific class of spectral basis functions together with RK4 time-stepping to determine the dynamical phases/structures as well as phase transitions of these ellipsoidal clusters. We will also discuss how to define the order (or orientation) of these clusters and understand the other rheological quantities.
Modelling biological gel mechanics
12:10 Mon 8 Sep, 2014 :: B.19 Ingkarni Wardli :: James Reoch :: University of Adelaide

Media...
The behaviour of gels such as collagen is the result of complex interactions between mechanical and chemical forces. In this talk, I will outline the modelling approaches we are looking at in order to incorporate the influence of cell behaviour alongside chemical potentials, and the various circumstances which lead to gel swelling and contraction.
Inferring absolute population and recruitment of southern rock lobster using only catch and effort data
12:35 Mon 22 Sep, 2014 :: B.19 Ingkarni Wardli :: John Feenstra :: University of Adelaide

Media...
Abundance estimates from a data-limited version of catch survey analysis are compared to those from a novel one-parameter deterministic method. Bias of both methods is explored by simulation testing based on a more complex, data-rich stock-assessment population-dynamics fishery operating model, examining the impact of varying levels of observation error in the data as well as of model process error. Recruitment was consistently better estimated than legal-size population, the latter being most sensitive to increasing observation errors. A hybrid of the data-limited methods is proposed as the most robust approach. A more statistically conventional errors-in-variables approach may also be touched upon, time permitting.
Spectral asymptotics on random Sierpinski gaskets
12:10 Fri 26 Sep, 2014 :: Ingkarni Wardli B20 :: Uta Freiberg :: Universitaet Stuttgart

Self-similar fractals are often used in modelling porous media. Hence, defining a Laplacian and a Brownian motion on such sets describes transport through such materials. However, the assumption of strict self-similarity could be too restrictive. So, we present several models of random fractals which could be used instead. After recalling the classical approaches of random homogeneous and recursive random fractals, we show how to interpolate between these two model classes with the help of so-called V-variable fractals. This concept (developed by Barnsley, Hutchinson & Stenflo) allows the definition of new families of random fractals, whereby the parameter V describes the degree of `variability' of the realizations. We discuss how the degree of variability influences the geometric, analytic and stochastic properties of these sets. - These results have been obtained with Ben Hambly (University of Oxford) and John Hutchinson (ANU Canberra).
A Hybrid Markov Model for Disease Dynamics
12:35 Mon 29 Sep, 2014 :: B.19 Ingkarni Wardli :: Nicolas Rebuli :: University of Adelaide

Media...
Modelling the spread of infectious diseases is fundamental to protecting ourselves from potentially devastating epidemics. Among other factors, two key indicators for the severity of an epidemic are the size of the epidemic and the time until the last infectious individual is removed. To estimate the distribution of the size and duration of an epidemic (within a realistic population) an epidemiologist will typically use Monte Carlo simulations of an appropriate Markov process. However, the number of states in the simplest Markov epidemic model, the SIR model, is quadratic in the population size and so Monte Carlo simulations are computationally expensive. In this talk I will discuss two methods for approximating the SIR Markov process and I will demonstrate the approximation error by comparing probability distributions and estimates of the distributions of the final size and duration of an SIR epidemic.
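The process being approximated can be sketched directly: a minimal Doob-Gillespie (exact stochastic) simulation of the Markovian SIR model, whose Monte Carlo cost is what motivates the approximations in the talk. Parameter values and names below are illustrative only.

```python
import random

def sir_epidemic(N, I0, beta, gamma, rng):
    """Simulate one SIR outbreak; return (final size, duration).
    Transitions: infection at rate beta*S*I/N, recovery at rate gamma*I."""
    S, I, t = N - I0, I0, 0.0
    while I > 0:
        infection = beta * S * I / N
        total = infection + gamma * I
        t += rng.expovariate(total)          # exponential time to next event
        if rng.random() * total < infection:
            S -= 1; I += 1                   # infection event
        else:
            I -= 1                           # recovery event
    return N - I0 - S, t

rng = random.Random(42)
outbreaks = [sir_epidemic(200, 1, 2.0, 1.0, rng) for _ in range(500)]
```

Even for this toy population, resolving the characteristically bimodal final-size distribution takes hundreds of runs, and the state space grows quadratically with N - exactly the cost that cheaper approximating processes aim to avoid.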
Modelling segregation distortion in multi-parent crosses
15:00 Mon 17 Nov, 2014 :: 5.57 Ingkarni Wardli :: Rohan Shah (joint work with B. Emma Huang and Colin R. Cavanagh) :: The University of Queensland

Construction of high-density genetic maps has been made feasible by low-cost high-throughput genotyping technology; however, the process is still complicated by biological, statistical and computational issues. A major challenge is the presence of segregation distortion, which can be caused by selection, difference in fitness, or suppression of recombination due to introgressed segments from other species. Alien introgressions are common in major crop species, where they have often been used to introduce beneficial genes from wild relatives. Segregation distortion causes problems at many stages of the map construction process, including assignment to linkage groups and estimation of recombination fractions. This can result in incorrect ordering and estimation of map distances. While discarding markers will improve the resulting map, it may result in the loss of genomic regions under selection or containing beneficial genes (in the case of introgression). To correct for segregation distortion we model it explicitly in the estimation of recombination fractions. Previously proposed methods introduce additional parameters to model the distortion, with a corresponding increase in computing requirements. This poses difficulties for large, densely genotyped experimental populations. We propose a method imposing minimal additional computational burden which is suitable for high-density map construction in large multi-parent crosses. We demonstrate its use modelling the known Sr36 introgression in wheat for an eight-parent complex cross.
Multiscale modelling of multicellular biological systems: mechanics, development and disease
03:10 Fri 6 Mar, 2015 :: Lower Napier LG24 :: Dr James Osborne :: University of Melbourne

When investigating the development and function of multicellular biological systems it is not enough to only consider the behaviour of individual cells in isolation. For example, when studying tissue development, how individual cells interact, both mechanically and biochemically, influences the resulting tissue's form and function. In this talk we present a multiscale modelling framework for simulating the development and function of multicellular biological systems (in particular tissues). Utilising the natural structural unit of the cell, the framework consists of three main scales: the tissue level (macro-scale); the cell level (meso-scale); and the sub-cellular level (micro-scale), with multiple interactions occurring between all scales. The cell level is central to the framework and cells are modelled as discrete interacting entities using one of a number of possible modelling paradigms, including lattice-based models (cellular automata and cellular Potts) and off-lattice models (cell-centre and vertex-based representations). The sub-cellular level concerns numerous metabolic and biochemical processes, represented by interaction networks modelled stochastically or as systems of ODEs. The outputs from such systems influence the behaviour of the cell level, affecting properties such as adhesion and also influencing cell mitosis and apoptosis. At the tissue level we consider factors or restraints that influence the cells, for example the distribution of a nutrient or messenger molecule, which is represented by field equations on a growing domain, with individual cells functioning as sinks and/or sources. The modular approach taken within the framework enables more realistic behaviour to be considered at each scale. This framework is implemented within the open-source Chaste library (Cancer, Heart and Soft Tissue Environment, http://www.cs.ox.ac.uk/chaste/) and has been used to model biochemical and biomechanical interactions in various biological systems.
In this talk we present the key ideas of the framework along with applications within the fields of development and disease.
Dynamic programming and optimal scoring rates in cricket
12:10 Mon 30 Mar, 2015 :: Napier LG29 :: Mingmei Teo :: University of Adelaide

Media...
With the cricket World Cup having reached its exciting conclusion and many World Cup batting records being re-written, we look back to the year 1987, when batting occurred at a more sedate pace and totals of 300+ were a rarity. In this talk, I'll discuss how dynamic programming has been applied to one-day cricket to determine optimal scoring rates, and I'll also give a brief introduction to what dynamic programming is and a common method used to solve dynamic programming problems.
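The flavour of such a dynamic program can be conveyed with a toy model: with b balls and w wickets remaining, the batting side picks a scoring rate from a menu in which faster scoring carries a higher dismissal probability, and backward induction gives the expected further runs. The menu of (runs per ball, dismissal probability) pairs below is invented for illustration; it is not the model from the talk nor fitted to any data.

```python
# (runs per ball, probability of dismissal on that ball) - illustrative only
RATES = [(0.5, 0.01), (1.0, 0.03), (1.5, 0.08), (2.0, 0.20)]

def expected_runs(balls, wickets):
    """Backward induction: V[b][w] = maximum expected further runs
    with b balls and w wickets in hand (V[.][0] = 0, all out)."""
    V = [[0.0] * (wickets + 1) for _ in range(balls + 1)]
    for b in range(1, balls + 1):
        for w in range(1, wickets + 1):
            V[b][w] = max(p * V[b - 1][w - 1]             # out: lose a wicket
                          + (1 - p) * (r + V[b - 1][w])   # not out: bank r runs
                          for r, p in RATES)
    return V

V = expected_runs(300, 10)   # a 50-over (300-ball) innings
```

Reading off the maximising rate at each state reproduces the familiar structure: play cautiously with many balls left, and accelerate as balls run out relative to wickets in hand.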
How do we quantify the filamentous growth in a yeast colony?
12:10 Mon 30 Mar, 2015 :: Ingkarni Wardli 715 Conference Room :: Dr. Benjamin Binder :: School of Mathematical Sciences

Media...
In this talk we will develop a systematic method to measure the spatial patterning of colony morphology. A hybrid modelling approach of the growth process will also be discussed.
Group Meeting
15:10 Fri 24 Apr, 2015 :: N218 Engineering North :: Dr Ben Binder :: University of Adelaide

Talk (Dr Ben Binder): How do we quantify the filamentous growth in a yeast colony? Abstract: In this talk we will develop a systematic method to measure the spatial patterning of yeast colony morphology. The methods are applicable to other physical systems with circular spatial domains, for example, batch mixing fluid devices. A hybrid modelling approach of the yeast growth process will also be discussed. After the seminar, Ben will start a group discussion by sharing some information and experiences on attracting honours/PhD students to the group.
Haven't I seen you before? Accounting for partnership duration in infectious disease modeling
15:10 Fri 8 May, 2015 :: Level 7 Conference Room Ingkarni Wardli :: Dr Joel Miller :: Monash University

Media...

Our ability to accurately predict and explain the spread of an infectious disease is a significant factor in our ability to implement effective interventions. Our ability to accurately model disease spread depends on how accurately we capture the various effects. This is complicated by the fact that infectious disease spread involves a number of time scales. Four that are particularly relevant are: duration of infection in an individual, duration of partnerships between individuals, the time required for an epidemic to spread through the population, and the time required for the population structure to change (demographic or otherwise).

Mathematically simple models of disease spread usually make the implicit assumption that the duration of partnerships is by far the shortest time scale in the system. Thus they miss out on the tendency for infected individuals to deplete their local pool of susceptibles. Depending on the details of the disease in question, this effect may be significant.

I will discuss work done to reduce these assumptions for "SIR" (Susceptible-Infected-Recovered) diseases, which allows us to interpolate between populations which are static and populations which change partners rapidly in closed populations (no entry/exit). I will then discuss early results in applying these methods to diseases such as HIV in which the population time scales are relevant.

Group Meeting
15:10 Fri 29 May, 2015 :: EM 213 :: Dr Judy Bunder :: University of Adelaide

Talk: Patch dynamics for efficient exascale simulations. Abstract: Massive parallelisation has led to a dramatic increase in available computational power. However, data transfer speeds have failed to keep pace and are the major limiting factor in the development of exascale computing. New algorithms must be developed which minimise the transfer of data. Patch dynamics is a computational macroscale modelling scheme which provides a coarse macroscale solution of a problem defined on a fine microscale by dividing the domain into many nonoverlapping, coupled patches. Patch dynamics is readily adaptable to massive parallelisation, as each processor core can evaluate the dynamics on one, or a few, patches. However, patch coupling conditions interpolate across the unevaluated parts of the domain between patches and require almost continuous data transfer. We propose a modified patch dynamics scheme which minimises data transfer by only re-evaluating the patch coupling conditions at `mesoscale' time scales which are significantly larger than the time scale of the microscale problem. We analyse and quantify the error arising from patch dynamics with mesoscale temporal coupling.
Dynamics on Networks: The role of local dynamics and global networks on hypersynchronous neural activity
15:10 Fri 31 Jul, 2015 :: Ingkarni Wardli B21 :: Prof John Terry :: University of Exeter, UK

Media...

Graph theory has evolved into a useful tool for studying complex brain networks inferred from a variety of measures of neural activity, including fMRI, DTI, MEG and EEG. In the study of neurological disorders, recent work has discovered differences in the structure of graphs inferred from patient and control cohorts. However, most of these studies pursue a purely observational approach; identifying correlations between properties of graphs and the cohort which they describe, without consideration of the underlying mechanisms. To move beyond this necessitates the development of mathematical modelling approaches to appropriately interpret network interactions and the alterations in brain dynamics they permit.

In the talk we introduce some of these concepts with application to epilepsy, introducing a dynamic network approach to study resting state EEG recordings from a cohort of 35 people with epilepsy and 40 adult controls. Using this framework we demonstrate a strongly significant difference between networks inferred from the background activity of people with epilepsy in comparison to normal controls. Our findings demonstrate that a mathematical model based analysis of routine clinical EEG provides significant additional information beyond standard clinical interpretation, which may ultimately enable a more appropriate mechanistic stratification of people with epilepsy leading to improved diagnostics and therapeutics.

In vitro models of colorectal cancer: why and how?
15:10 Fri 7 Aug, 2015 :: B19 Ingkarni Wardli :: Dr Tamsin Lannagan :: Gastrointestinal Cancer Biology Group, University of Adelaide / SAHMRI

1 in 20 Australians will develop colorectal cancer (CRC) and it is the second most common cause of cancer death. Similar to many other cancer types, it is the metastases rather than the primary tumour that are lethal, and prognosis is defined by “how far” the tumour has spread at time of diagnosis. Modelling in vivo behaviour through rapid and relatively inexpensive in vitro assays would help better target therapies as well as help develop new treatments. One such new in vitro tool is the culture of 3D organoids. Organoids are a biologically stable means of growing, storing and testing treatments against bowel cancer. To this end, we have just set up a human colorectal organoid bank across Australia. This consortium will help us to relate in vitro growth patterns to in vivo behaviour and ultimately assist in the selection of patients for personalised therapies. Organoid growth, however, is complex. There appear to be variable growth rates and growth patterns. Together with members of the ECMS we recently gained funding to better quantify and model spatial structures in these colorectal organoids. This partnership will aim to directly apply the expertise within the ECMS to patient care.
Modelling terrorism risk - can we predict future trends?
12:10 Mon 10 Aug, 2015 :: Benham Labs G10 :: Stephen Crotty :: University of Adelaide

Media...
As we are all aware, the incidence of terrorism is increasing in the world today. This is confirmed when viewing terrorism events since 1970 as a time series. Can we model this increasing trend and use it to predict terrorism events in the future? Probably not, but we'll give it a go anyway.
Modelling Directionality in Stationary Geophysical Time Series
12:10 Mon 12 Oct, 2015 :: Benham Labs G10 :: Mohd Mahayaudin Mansor :: University of Adelaide

Media...
Many time series show directionality inasmuch as plots against time and against time-to-go are qualitatively different, and there is a range of statistical tests to quantify this effect. There are two strategies for allowing for directionality in time series models. Linear models are reversible if and only if the noise terms are Gaussian, so one strategy is to use linear models with non-Gaussian noise. The alternative is to use non-linear models. We investigate how non-Gaussian noise affects directionality in a first-order autoregressive process AR(1) and compare this with a threshold autoregressive model with two thresholds. The findings are used to suggest possible improvements to an AR(9) model, identified by an AIC criterion, for the average yearly sunspot numbers from 1700 to 1900. The improvement is defined in terms of one-step-ahead forecast errors from 1901 to 2014.
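A directional series can be manufactured and detected in a few lines. The sketch below simulates AR(1) with Gaussian versus centred-exponential noise and applies one common irreversibility statistic, the skewness of the first differences (zero in expectation for a time-reversible series). This illustrates the phenomenon only; it is not the tests or models used in the talk.

```python
import random

def ar1(n, phi, noise, seed):
    """Simulate x_t = phi * x_{t-1} + e_t with e_t drawn by noise(rng)."""
    rng = random.Random(seed)
    x, series = 0.0, []
    for _ in range(n):
        x = phi * x + noise(rng)
        series.append(x)
    return series

def diff_skewness(x):
    """Skewness of first differences: ~0 for a reversible series,
    systematically nonzero for a directional one."""
    d = [b - a for a, b in zip(x, x[1:])]
    m2 = sum(v * v for v in d) / len(d)
    m3 = sum(v ** 3 for v in d) / len(d)
    return m3 / m2 ** 1.5

gaussian = ar1(20000, 0.5, lambda r: r.gauss(0.0, 1.0), seed=1)
skewed = ar1(20000, 0.5, lambda r: r.expovariate(1.0) - 1.0, seed=1)
```

With the Gaussian noise the statistic hovers near zero (the linear Gaussian AR(1) is reversible); with the centred exponential noise it is clearly positive, reflecting the slow-rise, sharp-fall asymmetry of the sample paths.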
Modelling Coverage in RNA Sequencing
09:00 Mon 9 Nov, 2015 :: Ingkarni Wardli 5.57 :: Arndt von Haeseler :: Max F Perutz Laboratories, University of Vienna

Media...
RNA sequencing (RNA-seq) is the method of choice for measuring the expression of RNAs in a cell population. In an RNA-seq experiment, sequencing the full length of larger RNA molecules requires fragmentation into smaller pieces to be compatible with the limited read lengths of most deep-sequencing technologies. Unfortunately, non-uniform coverage across a genomic feature has been a concern in RNA-seq and is attributed to preferences for certain fragments in steps of library preparation and sequencing. However, the disparity between the observed non-uniformity of read coverage in RNA-seq data and the assumption of expected uniformity raises the question of what read coverage profile one should expect across a transcript if there are no biases in the sequencing protocol. We propose a simple model of unbiased fragmentation, in which we find that the expected coverage profile is not uniform and, in fact, depends on the ratio of fragment length to transcript length. To compare the non-uniformity predicted by our model with experimental data, we extended this simple model to incorporate empirical attributes matching those of the sequenced transcript in an RNA-seq experiment. In addition, we imposed an experimentally derived distribution on the frequency at which fragment lengths occur.

We used this model to compare our theoretical prediction with experimental data and with the uniform coverage model. If time permits, we will also discuss a potential application of our model.
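The edge effect at the heart of such a model can be reproduced in a few lines. Assuming the simplest version - fragments of fixed length f whose start positions are uniform along the transcript - the expected relative coverage at each base is just the fraction of valid start positions whose fragment covers that base. This is a sketch of the idea only, not the authors' full model with empirical fragment-length distributions.

```python
def expected_coverage(L, f):
    """Expected relative coverage at each base of a transcript of length L,
    for fragments of length f with start positions uniform on 0..L-f."""
    n_starts = L - f + 1
    profile = []
    for x in range(L):
        # starts s with s <= x <= s + f - 1 and 0 <= s <= L - f cover base x
        covering = min(x, L - f) - max(0, x - f + 1) + 1
        profile.append(max(covering, 0) / n_starts)
    return profile

cov = expected_coverage(100, 20)
```

The profile ramps up over the first f - 1 bases, plateaus at f / (L - f + 1) in the interior and ramps down symmetrically, so even unbiased fragmentation yields non-uniform coverage whose shape depends on the ratio of fragment length to transcript length.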
Weak globularity in homotopy theory and higher category theory
12:10 Thu 12 Nov, 2015 :: Ingkarni Wardli B19 :: Simona Paoli :: University of Leicester

Media...
Spaces and homotopy theories are fundamental objects of study of algebraic topology. One way to study these objects is to break them into smaller components with the Postnikov decomposition. To describe such decomposition purely algebraically we need higher categorical structures. We describe one approach to modelling these structures based on a new paradigm to build weak higher categories, which is the notion of weak globularity. We describe some of their connections to both homotopy theory and higher category theory.
Use of epidemic models in optimal decision making
15:00 Thu 19 Nov, 2015 :: Ingkarni Wardli 5.57 :: Tim Kinyanjui :: School of Mathematics, The University of Manchester

Media...
Epidemic models have proved useful in a number of applications in epidemiology. In this work, I will present two areas in which we have used modelling to make informed decisions. Firstly, we have used an age-structured mathematical model to describe the transmission of Respiratory Syncytial Virus in a developed-country setting and to explore different vaccination strategies. We found that delayed infant vaccination has significant potential to reduce the number of hospitalisations in the most vulnerable group and that most of the reduction is due to indirect protection. Our results also suggest that marked public health benefit could be achieved through an RSV vaccine delivered to age groups not seen as most at risk of severe disease. The second application is in the optimal design of studies aimed at the collection of household-stratified infection data. A design decision involves making a trade-off between the number of households to enrol and the sampling frequency. Two commonly used study designs are considered: cross-sectional and cohort. The search for an optimal design uses Bayesian methods to explore the joint parameter-design space, combined with the Shannon entropy of the posteriors to estimate the amount of information for each design. We found that for cross-sectional designs the amount of information increases with the sampling intensity, while the cohort design often exhibits a trade-off between the number of households sampled and the intensity of follow-up. Our results broadly support the choices made in existing data collection studies.
Mathematical modelling of the immune response to influenza
15:00 Thu 12 May, 2016 :: Ingkarni Wardli B20 :: Ada Yan :: University of Melbourne

Media...
The immune response plays an important role in the resolution of primary influenza infection and prevention of subsequent infection in an individual. However, the relative roles of each component of the immune response in clearing infection, and the effects of interaction between components, are not well quantified.

We have constructed a model of the immune response to influenza based on data from viral interference experiments, where ferrets were exposed to two influenza strains within a short time period. The changes in viral kinetics of the second virus due to the first virus depend on the strains used as well as the interval between exposures, enabling inference of the timing of innate and adaptive immune response components and the role of cross-reactivity in resolving infection. Our model provides a mechanistic explanation for the observed variation in viruses' abilities to protect against subsequent infection at short inter-exposure intervals, either by delaying the second infection or inducing stochastic extinction of the second virus. It also explains the decrease in recovery time for the second infection when the two strains elicit cross-reactive cellular adaptive immune responses. To account for inter-subject as well as inter-virus variation, the model is formulated using a hierarchical framework. We will fit the model to experimental data using Markov Chain Monte Carlo methods; quantification of the model will enable a deeper understanding of the effects of potential new treatments.
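As context for the kind of model being fitted, here is the widely used target-cell-limited ("TIV") baseline for within-host influenza dynamics, integrated with Euler steps. It is a generic single-strain baseline, not the talk's hierarchical two-strain model, and the parameter values (of the order used in the influenza modelling literature) are illustrative only.

```python
def tiv_model(T0=4e8, V0=1.0, beta=2.7e-5, delta=4.0, p=1.2e-2, c=3.0,
              t_end=10.0, dt=1e-3):
    """Target cells T, infected cells I, virus V (time in days):
       dT/dt = -beta*T*V,  dI/dt = beta*T*V - delta*I,  dV/dt = p*I - c*V."""
    T, I, V = T0, 0.0, V0
    peak_V, t = V0, 0.0
    while t < t_end:
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T += dt * dT
        I += dt * dI
        V += dt * dV
        peak_V = max(peak_V, V)
        t += dt
    return T, I, V, peak_V

T_end, I_end, V_end, peak = tiv_model()
```

Fitting extensions of such ODEs to viral titre data - in the talk, within a hierarchical framework estimated by MCMC - is what allows the timing and strength of the innate and adaptive components to be inferred.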
Behavioural Microsimulation Approach to Social Policy and Behavioural Economics
15:10 Fri 20 May, 2016 :: S112 Engineering South :: Dr Drew Mellor :: Ernst & Young

SIMULAIT is a general-purpose behavioural micro-simulation system designed to predict behavioural trends in human populations. This type of predictive capability grew out of original research initially conducted in conjunction with the Defence Science and Technology Organisation (DSTO) in South Australia, and has been fully commercialised and is in current use by a global customer base. To our customers, the principal value of the system lies in its ability to predict likely outcomes of scenarios that challenge conventional approaches based on extrapolation or generalisation. These types of scenarios include: the impact of disruptive technologies, such as the wide-spread adoption of autonomous vehicles for transportation or of batteries for household energy storage; and the impact of policy elements or interventions, such as the imposition of water usage restrictions. SIMULAIT employs a multi-disciplinary methodology, drawing from agent-based modelling, behavioural science and psychology, microeconomics, artificial intelligence, simulation, game theory, engineering, mathematics and statistics. In this seminar, we start with a high-level view of the system, followed by a look under the hood to see how the various elements come together to answer questions about behavioural trends. The talk will conclude with a case study of a recent application of SIMULAIT to a significant policy problem - how to address the deficiency of STEM-skilled teachers in the Victorian teaching workforce.
Student Performance Issues in First Year University Calculus
15:10 Fri 10 Jun, 2016 :: Engineering South S112 :: Dr Christine Mangelsdorf :: University of Melbourne

MAST10006 Calculus 2 is the largest subject in the School of Mathematics and Statistics at the University of Melbourne, accounting for about 2200 out of 7400 first year enrolments. Despite excellent and consistent feedback from students on lectures, tutorials and teaching materials, scaled failure rates in Calculus 2 averaged an unacceptably high 29.4% (with raw failure rates reaching 40%) by the end of 2014. To understand the issues behind the poor student performance, we studied the exam papers of students with grades of 40-49% over a three-year period. In this presentation, I will present data on areas of poor performance in the final exam, show samples of student work, and identify possible causes for their errors. Many of the performance issues are found to relate to basic weaknesses in the students’ secondary school mathematical skills that inhibit their ability to successfully complete Calculus 2. Since 2015, we have employed a number of approaches to support students’ learning that have significantly improved student performance in assessment. I will discuss the changes made to assessment practices and the extra support materials, provided online and in person, that are driving the improvement.
Approaches to modelling cells and remodelling biological tissues
14:10 Wed 10 Aug, 2016 :: Ingkarni Wardli 5.57 :: Professor Helen Byrne :: University of Oxford

Biological tissues are complex structures, whose evolution is characterised by multiple biophysical processes that act across diverse space and time scales. For example, during normal wound healing, fibroblast cells located around the wound margin exert contractile forces to close the wound while those located in the surrounding tissue synthesise new tissue in response to local growth factors and mechanical stress created by wound contraction. In this talk I will illustrate how mathematical modelling can provide insight into such complex processes, taking my inspiration from recent studies of cell migration, vasculogenesis and wound healing.
Mathematical modelling of social spreading processes
15:10 Fri 19 Aug, 2016 :: Napier G03 :: Prof Hans De Sterck :: Monash University

Social spreading processes are intriguing manifestations of how humans interact and shape each other's lives. There is great interest in improving our understanding of these processes, and the increasing availability of empirical information in the era of big data and online social networks, combined with mathematical and computational modelling techniques, offers compelling new ways to study them. I will first discuss mathematical models for the spread of political revolutions on social networks. The influence of online social networks and social media on the dynamics of the Arab Spring revolutions of 2011 is of particular interest in our work. I will describe a hierarchy of models, starting from agent-based models realized on empirical social networks, and ending up with population-level models that summarize the dynamical behaviour of the spreading process. We seek to understand quantitatively how political revolutions may be facilitated by the modern online social networks of social media. The second part of the talk will describe a population-level model for the social dynamics that cause cigarette smoking to spread in a population. Our model predicts that more individualistic societies will show faster adoption and cessation of smoking. Evidence from a newly composed century-long composite data set on smoking prevalence in 25 countries supports the model, with potential implications for public health interventions around the world. Throughout the talk, I will argue that important aspects of social spreading processes can be revealed and understood via quantitative mathematical and computational models matched to empirical data. This talk describes joint work with John Lang and Danny Abrams.
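The agent-based end of such a model hierarchy can be illustrated with a toy threshold model of adoption on a random network. The graph, threshold and seed set below are arbitrary choices for illustration, not the empirical networks or calibrated models used in the work described.

```python
# Sketch of a synchronous threshold-cascade model on an Erdos-Renyi graph:
# a node adopts once at least a fixed fraction of its neighbours have.
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 0.05
upper = np.triu(rng.uniform(size=(n, n)) < p, 1)
A = (upper | upper.T).astype(float)   # symmetric random adjacency matrix
deg = A.sum(axis=1).clip(min=1)       # avoid division by zero for isolates

threshold = 0.2                       # adopt if >= 20% of neighbours adopted
active = np.zeros(n)
active[:5] = 1.0                      # a small seed of initial adopters

changed = True
while changed:                        # iterate the cascade to a fixed point
    frac = (A @ active) / deg         # fraction of adopted neighbours
    new = np.maximum(active, (frac >= threshold).astype(float))
    changed = bool((new != active).any())
    active = new

print(int(active.sum()))              # final number of adopters
```

Because adoption is monotone, the loop is guaranteed to terminate; whether the cascade fizzles or spreads globally depends on the seed, threshold and network density.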
Modelling evolution of post-menopausal human longevity: The Grandmother Hypothesis
15:10 Fri 2 Sep, 2016 :: Napier G03 :: Dr Peter Kim :: University of Sydney

Human post-menopausal longevity makes us unique among primates, but how did it evolve? One explanation, the Grandmother Hypothesis, proposes that as grasslands spread in ancient Africa, displacing foods that ancestral youngsters could effectively exploit, older females whose fertility was declining left more descendants by subsidizing grandchildren and allowing mothers to have new babies sooner. As more robust elders could help more descendants, selection favoured increased longevity while maintaining the ancestral end of female fertility. We develop a probabilistic agent-based model that incorporates two sexes and mating, fertility-longevity tradeoffs, and the possibility of grandmother help. Using this model, we show how the grandmother effect could have driven the evolution of human longevity. Simulations reveal two stable life-histories, one human-like and the other like our nearest cousins, the great apes. The probabilistic formulation shows how stochastic effects can slow down and prevent escape from the ancestral condition, and it allows us to investigate the effect of mutation rates on the trajectory of evolution.
The mystery of colony collapse: Mathematics and honey bee loss
15:10 Fri 16 Sep, 2016 :: Napier G03 :: Prof Mary Myerscough :: University of Sydney

Honey bees are vital to the production of many foods which need to be pollinated by insects. Yet in many parts of the world honey bee colonies are in decline. A crucial contributor to hive well-being is the health, productivity and longevity of its foragers. When forager numbers are depleted due to stressors in the colony (such as disease or malnutrition) or in the environment (such as pesticides) there is a significant effect, not only on the amount of food (nectar and pollen) that can be collected but also on the colony's capacity to raise brood (eggs, larvae and pupae) to produce new adult bees to replace lost or aged bees. We use a set of differential equation models to explore the effect on the hive of high forager death rates. In particular we examine what happens when bees become foragers at a comparatively young age and how this can lead to a sudden rapid decline of adult bees and the death of the colony.
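A stripped-down version of this kind of compartment model, in the spirit of published hive-bee/forager models (cf. Khoury, Myerscough and Barron), can be sketched as follows. All parameter values are illustrative only, not fitted to data.

```python
# Toy two-compartment colony model: hive bees H mature into foragers F,
# with recruitment slowed by social inhibition from existing foragers.
# High forager death rates can tip the colony into decline.

def simulate(m, days=500, dt=0.1):
    """Euler-integrate hive bees H and foragers F; m is forager death rate."""
    L, w = 2000.0, 27000.0       # max eclosion rate, brood saturation const
    a, s = 0.25, 0.75            # base recruitment rate, social inhibition
    H, F = 16000.0, 8000.0
    for _ in range(int(days / dt)):
        N = H + F
        if N <= 0:
            return 0.0
        eclosion = L * N / (w + N)            # new adult bees emerging
        recruit = H * max(a - s * F / N, 0.0)  # hive bees becoming foragers
        H += dt * (eclosion - recruit)
        F += dt * (recruit - m * F)
    return H + F                 # total adult population after `days`

healthy = simulate(m=0.15)       # moderate forager mortality: stable colony
stressed = simulate(m=0.60)      # high mortality: rapid decline to collapse
print(round(healthy), round(stressed))
```

The qualitative behaviour matches the abstract's story: above a critical forager death rate the model loses its positive equilibrium and the adult population collapses.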
A principled experimental design approach to big data analysis
15:10 Fri 23 Sep, 2016 :: Napier G03 :: Prof Kerrie Mengersen :: Queensland University of Technology

Big Datasets are endemic, but they are often notoriously difficult to analyse because of their size, complexity, history and quality. The purpose of this paper is to open a discourse on the use of modern experimental design methods to analyse Big Data in order to answer particular questions of interest. By appeal to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has wide generality and advantageous inferential and computational properties. In particular, the principled experimental design approach is shown to provide a flexible framework for analysis that, for certain classes of objectives and utility functions, delivers equivalent answers compared with analyses of the full dataset. It can also provide a formalised method for iterative parameter estimation, model checking, identification of data gaps and evaluation of data quality. Finally it has the potential to add value to other Big Data sampling algorithms, in particular divide-and-conquer strategies, by determining efficient sub-samples.
Transmission Dynamics of Visceral Leishmaniasis: designing a test and treat control strategy
12:10 Thu 29 Sep, 2016 :: EM218 :: Graham Medley :: London School of Hygiene & Tropical Medicine

Visceral Leishmaniasis (VL) is targeted for elimination from the Indian Sub-Continent. Progress has been much better in some areas than others. Current control is based on earlier diagnosis and treatment and on insecticide spraying to reduce the density of the vector. There is a surprising dearth of specific information on the epidemiology of VL, which makes modelling more difficult. In this seminar, I describe a simple framework that gives some insight into the transmission dynamics. We conclude that the majority of infection comes from cases prior to diagnosis. If this is the case, then early diagnosis will be advantageous, but will require a test with high specificity. This is a paradox for many clinicians and public health workers, who tend to prioritise high sensitivity.

Medley, G.F., Hollingsworth, T.D., Olliaro, P.L. & Adams, E.R. (2015) Health-seeking, diagnostics and transmission in the control of visceral leishmaniasis. Nature 528, S102-S108 (3 December 2015), DOI: 10.1038/nature16042
Measuring and mapping carbon dioxide from remote sensing satellite data
15:10 Fri 21 Oct, 2016 :: Napier G03 :: Prof Noel Cressie :: University of Wollongong

This talk is about environmental statistics for global remote sensing of atmospheric carbon dioxide, a leading greenhouse gas. An important compartment of the carbon cycle is atmospheric carbon dioxide (CO2), where it (and other gases) contribute to climate change through a greenhouse effect. There are a number of CO2 observational programs where measurements are made around the globe at a small number of ground-based locations at somewhat regular time intervals. In contrast, satellite-based programs are spatially global but give up some of the temporal richness. The most recent satellite launched to measure CO2 was NASA's Orbiting Carbon Observatory-2 (OCO-2), whose principal objective is to retrieve a geographical distribution of CO2 sources and sinks. OCO-2's measurement of column-averaged mole fraction, XCO2, is designed to achieve this, through a data-assimilation procedure that is statistical at its basis. Consequently, uncertainty quantification is key, starting with the spectral radiances from an individual sounding to borrowing of strength through spatial-statistical modelling.
Segregation of particles in incompressible flows due to streamline topology and particle-boundary interaction
15:10 Fri 2 Dec, 2016 :: Ingkarni Wardli 5.57 :: Professor Hendrik C. Kuhlmann :: Institute of Fluid Mechanics and Heat Transfer, TU Wien, Vienna, Austria

The incompressible flow in a number of classical benchmark problems (e.g. lid-driven cavity, liquid bridge) undergoes an instability from a two-dimensional steady flow to a three-dimensional flow, which is either steady or takes the form of a travelling wave, as the Reynolds number is increased. In the supercritical regime chaotic as well as regular (quasi-periodic) streamlines can coexist for a range of Reynolds numbers. The spatial structures of the regular regions in three-dimensional Navier-Stokes flows have received relatively little attention, partly because of the high numerical effort required for resolving these structures. Particles whose density does not differ much from that of the liquid approximately follow the chaotic or regular streamlines in the bulk. Near the boundaries, however, their trajectories strongly deviate from the streamlines, in particular if the boundary (wall or free surface) is moving tangentially. As a result of this particle-boundary interaction particles can rapidly segregate and be attracted to periodic or quasi-periodic orbits, yielding particle accumulation structures (PAS). The mechanism of PAS will be explained and results from experiments and numerical modelling will be presented to demonstrate the generic character of the phenomenon.
On the fundamentals of Rayleigh-Taylor instability and interfacial mixing
15:10 Fri 15 Sep, 2017 :: Ingkarni Wardli B17 :: Prof Snezhana Abarzhi :: University of Western Australia

Rayleigh-Taylor instability (RTI) develops when fluids of different densities are accelerated against their density gradient. Extensive interfacial mixing of the fluids ensues with time. Rayleigh-Taylor (RT) mixing controls a broad variety of processes in fluids, plasmas and materials, in high and low energy density regimes, at astrophysical and atomistic scales. Examples include formation of the hot spot in inertial confinement fusion, supernova explosions, stellar and planetary convection, flows in the atmosphere and ocean, reactive and supercritical fluids, material transformation under impact and light-material interaction. In some of these cases (e.g. inertial confinement fusion) RT mixing should be tightly mitigated; in some others (e.g. turbulent combustion) it should be strongly enhanced. Understanding the fundamentals of RTI is crucial for achieving a better control of non-equilibrium processes in nature and technology. Traditionally, it was presumed that RTI leads to uncontrolled growth of small-scale imperfections, single-scale nonlinear dynamics, and extensive mixing that is similar to canonical turbulence. The recent success of theory and experiments in fluids and plasmas suggests an alternative scenario of RTI evolution. It finds that the interface is necessary for RT mixing to accelerate, the acceleration effects are strong enough to suppress the development of turbulence, and the RT dynamics is multi-scale and has a significant degree of order. This talk presents a physics-based consideration of the fundamentals of RTI and RT mixing, and summarizes what is certain and what is not so certain in our knowledge of RTI. The focus question is: how can the regularization process in RT mixing be influenced? We also discuss new opportunities for improvements of predictive modeling capabilities, physical description, and control of RT mixing in fluids, plasmas and materials.
How oligomerisation impacts steady state gradient in a morphogen-receptor system
15:10 Fri 20 Oct, 2017 :: Ingkarni Wardli 5.57 :: Mr Phillip Brown :: University of Adelaide

In developmental biology an important process is cell fate determination, where cells start to differentiate their form and function. This is an element of the broader concept of morphogenesis. It has long been held that cell differentiation can occur by a chemical signal providing positional information to 'undecided' cells. This chemical produces a gradient of concentration that indicates to a cell what path it should develop along. More recently it has been shown that in a particular system of this type, the chemical (protein) does not exist purely as individual molecules, but can exist in multi-protein complexes known as oligomers. Mathematical modelling has been performed on systems of oligomers to determine if this concept can produce useful gradients of concentration. However, there is a wide range of possibilities for how oligomer systems can be modelled, and most of these have not been explored. In this talk I will introduce a new monomer system and analyse it, before extending this model to include oligomers. A number of oligomer models are proposed based on the assumption that proteins are only produced in their oligomer form and can only break apart once they have left the producing cell. It will be shown that when oligomers are present under these conditions, but only monomers are permitted to bind with receptors, then the system can produce robust, biologically useful gradients for a significantly larger range of model parameters (for instance, degradation, production and binding rates) compared to the monomer system. We will also show that when oligomers are permitted to bind with receptors there is negligible difference compared to the monomer system.
The Markovian binary tree applied to demography and conservation biology
15:10 Fri 27 Oct, 2017 :: Ingkarni Wardli B17 :: Dr Sophie Hautphenne :: University of Melbourne

Markovian binary trees form a general and tractable class of continuous-time branching processes, which makes them well-suited for real-world applications. Thanks to their appealing probabilistic and computational features, these processes have proven to be an excellent modelling tool for applications in population biology. Typical performance measures of these models include the extinction probability of a population, the distribution of the population size at a given time, the total progeny size until extinction, and the asymptotic population composition. Besides giving an overview of the main performance measures and the techniques involved to compute them, we discuss recently developed statistical methods to estimate the model parameters, depending on the accuracy of the available data. We illustrate our results in human demography and in conservation biology.
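The extinction probability of a branching process is the minimal fixed point of its progeny generating function, and Markovian binary tree algorithms solve the continuous-time analogue of this equation. The sketch below uses a discrete-time Galton-Watson process as the simplest analogue; the offspring distribution is an arbitrary example, not drawn from the talk.

```python
# Extinction probability of a Galton-Watson branching process, computed by
# iterating q <- f(q) from q = 0, which converges to the minimal fixed
# point of the offspring probability generating function f.

def extinction_probability(offspring_pmf, tol=1e-12, max_iter=10**6):
    """offspring_pmf[k] = P(an individual has k children)."""
    def f(q):
        return sum(p * q**k for k, p in enumerate(offspring_pmf))
    q = 0.0
    for _ in range(max_iter):
        q_next = f(q)
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    return q

# Offspring counts 0, 1, 2 with probabilities 0.2, 0.3, 0.5 (mean 1.3 > 1,
# so the process is supercritical and extinction is not certain).
q = extinction_probability([0.2, 0.3, 0.5])
print(q)   # minimal root of q = 0.2 + 0.3 q + 0.5 q^2, i.e. q = 0.4
```

The same functional-iteration idea underlies the matrix-analytic algorithms for Markovian binary trees, where the scalar fixed-point equation becomes a matrix equation.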
Stochastic Modelling of Urban Structure
11:10 Mon 20 Nov, 2017 :: Engineering Nth N132 :: Mark Girolami :: Imperial College London, and The Alan Turing Institute

Urban systems are complex in nature and comprise a large number of individuals that act according to utility, a measure of net benefit pertaining to preferences. The actions of individuals give rise to an emergent behaviour, creating the so-called urban structure that we observe. In this talk, I develop a stochastic model of urban structure to formally account for uncertainty arising from the complex behaviour. We further use this stochastic model to infer the components of a utility function from observed urban structure. This is a more powerful modelling framework in comparison to the ubiquitous discrete choice models that are of limited use for complex systems, in which the overall preferences of individuals are difficult to ascertain. We model urban structure as a realization of a Boltzmann distribution that is the invariant distribution of a related stochastic differential equation (SDE) that describes the dynamics of the urban system. Our specification of the Boltzmann distribution assigns higher probability to stable configurations, in the sense that consumer surplus (demand) is balanced with running costs (supply), as characterized by a potential function. We specify a Bayesian hierarchical model to infer the components of a utility function from observed structure. Our model is doubly-intractable and poses significant computational challenges that we overcome using recent advances in Markov chain Monte Carlo (MCMC) methods. We demonstrate our methodology with case studies on the London retail system and airports in England.
Calculating optimal limits for transacting credit card customers
15:10 Fri 2 Mar, 2018 :: Horace Lamb 1022 :: Prof Peter Taylor :: University of Melbourne

Credit card users can roughly be divided into 'transactors', who pay off their balance each month, and 'revolvers', who maintain an outstanding balance, on which they pay substantial interest. In this talk, we focus on modelling the behaviour of an individual transactor customer. Our motivation is to calculate an optimal credit limit from the bank's point of view. This requires an expression for the expected outstanding balance at the end of a payment period. We establish a connection with the classical newsvendor model. Furthermore, we derive the Laplace transform of the outstanding balance, assuming that purchases are made according to a marked point process and that there is a simplified balance control policy which prevents all purchases in the rest of the payment period when the credit limit is exceeded. We then use the newsvendor model and our modified model to calculate bounds on the optimal credit limit for the more realistic balance control policy that accepts all purchases that do not exceed the limit. We illustrate our analysis using a compound Poisson process example and show that the optimal limit scales with the distribution of the purchasing process, while the probability of exceeding the optimal limit remains constant. Finally, we apply our model to some real credit card purchase data.
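A Monte Carlo sketch of the newsvendor-style calculation: model the end-of-period balance of a transactor as a compound Poisson sum and take the limit to be a fixed quantile of that balance (the newsvendor critical fractile). The purchase rate, purchase-size distribution and the 95% fractile below are illustrative assumptions, not the values or the analytical Laplace-transform method of the talk.

```python
# Compound Poisson purchases: N ~ Poisson(rate) purchases per period,
# each purchase amount ~ Exponential(mean_purchase).
import numpy as np

rng = np.random.default_rng(42)

def simulate_balances(rate, mean_purchase, n=50_000):
    """Simulate n independent end-of-period balances."""
    counts = rng.poisson(rate, size=n)
    return np.array([rng.exponential(mean_purchase, k).sum() for k in counts])

balances = simulate_balances(rate=20, mean_purchase=50.0)
limit = np.quantile(balances, 0.95)        # newsvendor critical fractile
exceed = (balances > limit).mean()
print(round(limit, 2), round(exceed, 3))   # exceedance probability ~ 0.05
```

This illustrates the scaling property the abstract mentions: rescaling the purchase distribution rescales the quantile-based limit, while the exceedance probability stays fixed at the chosen fractile.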
Models, machine learning, and robotics: understanding biological networks
15:10 Fri 16 Mar, 2018 :: Horace Lamb 1022 :: Prof Steve Oliver :: University of Cambridge

The availability of complete genome sequences has enabled the construction of computer models of metabolic networks that may be used to predict the impact of genetic mutations on growth and survival. Both logical and constraint-based models of the metabolic network of the model eukaryote, the ale yeast Saccharomyces cerevisiae, have been available for some time and are continually being improved by the research community. While such models are very successful at predicting the impact of deleting single genes, the prediction of the impact of higher order genetic interactions is a greater challenge. Initial studies of limited gene sets provided encouraging results. However, the availability of comprehensive experimental data for the interactions between genes involved in metabolism demonstrated that, while the models were able to predict the general properties of the genetic interaction network, their ability to predict interactions between specific pairs of metabolic genes was poor. I will examine the reasons for this poor performance and demonstrate ways of improving the accuracy of the models by exploiting the techniques of machine learning and robotics. The utility of these metabolic models rests on the firm foundations of genome sequencing data. However, there are two major problems with these kinds of network models - they have no dynamics, and they do not deal with the uncertain and incomplete nature of much biological data. To deal with these problems, we have developed the Flexible Nets (FNs) modelling formalism. FNs were inspired by Petri Nets and can deal with missing or uncertain data, incorporate both dynamics and regulation, and also have the potential for model predictive control of biotechnological processes.
Modelling phagocytosis
15:10 Fri 25 May, 2018 :: Horace Lamb 1022 :: Prof Ngamta (Natalie) Thamwattana :: University of Wollongong

Phagocytosis refers to a process in which one cell type fully encloses and consumes unwanted cells, debris or particulate matter. It plays an important role in immune systems through the destruction of pathogens and the inhibiting of cancerous cells. In this study, we combine models on cell-cell adhesion and on predator-prey modelling to generate a new model for phagocytosis that is capable of relating the interaction between cells in both space and time. Numerical results are presented, demonstrating the behaviours of cells during the process of phagocytosis.
Topological Data Analysis
15:10 Fri 31 Aug, 2018 :: Napier 208 :: Dr Vanessa Robins :: Australian National University

Topological Data Analysis has grown out of work focussed on deriving qualitative and yet quantifiable information about the shape of data. The underlying assumption is that knowledge of shape - the way the data are distributed - permits high-level reasoning and modelling of the processes that created this data. The 0-th order aspect of shape is the number of pieces: "connected components" to a topologist; "clustering" to a statistician. Higher-order topological aspects of shape are holes, quantified as "non-bounding cycles" in homology theory. These signal the existence of some type of constraint on the data-generating process. Homology lends itself naturally to computer implementation, but its naive application is not robust to noise. This inspired the development of persistent homology: an algebraic topological tool that measures changes in the topology of a growing sequence of spaces (a filtration). Persistent homology provides invariants called barcodes or persistence diagrams that are sets of intervals recording the birth and death parameter values of each homology class in the filtration. It captures information about the shape of data over a range of length scales, and enables the identification of "noisy" topological structure. Statistical analysis of persistent homology has been challenging because the raw information (the persistence diagrams) is provided as sets of intervals rather than functions. Various approaches to converting persistence diagrams to functional forms have been developed recently, and have found application to data ranging from the distribution of galaxies, to porous materials, and cancer detection.
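A minimal sketch of 0-dimensional persistence for points on the line: every point starts its own component (born at filtration value 0), and growing balls of radius r merge adjacent points when r reaches half the gap between them. Real TDA software (e.g. GUDHI, Ripser) handles general complexes and dimensions; this toy handles only the 1-D case, and the point set is an arbitrary example.

```python
# 0-dimensional persistent homology of points on the real line.
# Components are born at r = 0; each merge of two components kills one
# H0 class, at r equal to half the gap between the merging points.

def zero_dim_barcode(points):
    """Return the finite death times of H0 classes, sorted ascending.
    One class (the whole connected point cloud) never dies."""
    xs = sorted(points)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    return sorted(g / 2 for g in gaps)

# Two well-separated clusters: small deaths within each cluster, and one
# large death when the clusters finally merge.
pts = [0.0, 0.1, 0.25, 5.0, 5.2]
bars = zero_dim_barcode(pts)
print(bars)   # largest finite death (~2.375) flags the two-cluster structure
```

The long bar surviving to r ≈ 2.375 is exactly the "persistent" feature a statistician would read as two clusters; the short bars are the noise that naive homology cannot separate out.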
Mathematical modelling of the emergence and spread of antimalarial drug resistance
15:10 Fri 14 Sep, 2018 :: Napier 208 :: Dr Jennifer Flegg :: University of Melbourne

Malaria parasites have repeatedly evolved resistance to antimalarial drugs, thwarting efforts to eliminate the disease and contributing to an increase in mortality. In this talk, I will introduce several statistical and mathematical models for monitoring the emergence and spread of antimalarial drug resistance. For example, results will be presented from Bayesian geostatistical models that have quantified the space-time trends in drug resistance in Africa and Southeast Asia. I will discuss how the results of these models have been used to update public health policy.
Some advances in the formulation of analytical methods for linear and nonlinear dynamics
15:10 Tue 20 Nov, 2018 :: EMG07 :: Dr Vladislav Sorokin :: University of Auckland

In modern engineering, it is often necessary to solve problems involving strong parametric excitation and (or) strong nonlinearity. Dynamics of micro- and nanoscale electro-mechanical systems and wave propagation in structures made of corrugated composite materials are just two examples. Numerical methods, although able to predict system behaviour for specific sets of parameters, fail to provide insight into the underlying physics. On the other hand, conventional analytical methods impose severe restrictions on the problem parameter space and (or) on the types of solutions. Thus, the quest for advanced tools to deal with linear and nonlinear structural dynamics continues, and this lecture is concerned with an advanced formulation of an analytical method. The principal novelty is that the presence of a small parameter in the governing equations is not required, so that dynamic problems involving strong parametric excitation and (or) strong nonlinearity can be considered. Another advantage of the method is that it is free from conventional restrictions on the excitation frequency spectrum and is applicable to problems involving combined multiple parametric and (or) direct excitations with incommensurate frequencies, essential for some applications. The use of the method will be illustrated with several examples, including analysis of the effects of corrugation shapes on the dispersion relation and frequency band-gaps of structures, and the dynamics of nonlinear parametric amplifiers.
The role of microenvironment in regulation of cell infiltration and bortezomib-OV therapy in glioblastoma
15:10 Fri 11 Jan, 2019 :: IW 5.57 :: Professor Yangjin Kim :: Konkuk University, South Korea

The tumor microenvironment (TME) plays a critical role in the regulation of tumor cell invasion in glioblastoma. Many microenvironmental factors, such as extracellular matrix, microglia and astrocytes, can either block or enhance this critical infiltration step in the brain [4]. Oncolytic viruses such as herpes simplex virus-1 (oHSV) are genetically modified to target and kill cancer cells while not harming healthy normal cells, and are currently under multiple clinical trials for safety and efficacy [1]. Bortezomib is a peptide-based proteasome inhibitor and is an FDA-approved drug for myeloma and mantle cell lymphoma. Yoo et al. [2] previously demonstrated that the bortezomib-induced unfolded protein response (UPR) in many tumor cell lines (glioma, ovarian, and head and neck) up-regulated expression of heat shock protein 90 (HSP90), which then enhanced viral replication through promotion of nuclear localization of the viral polymerase in vitro. This led to synergistic tumor cell killing in vitro, and a combination treatment of mice with oHSV and bortezomib showed improved anti-tumor efficacy in vivo [2]. This combination therapy also increased the surface expression levels of NK cell activating markers and enhanced pro-inflammatory cytokine secretion. These findings demonstrated that the synergistic interaction between oHSV and bortezomib, a clinically relevant proteasome inhibitor, augments the cancer cell killing and promotes overall therapeutic efficacy. We investigated the role of NK cells in combination therapy with oncolytic virus (OV) and bortezomib. NK cells display rapid and potent immunity to metastasis and hematological cancers, and they overcome immunosuppressive effects of the tumor microenvironment. We developed a mathematical model, a system of PDEs, in order to address the question of how the density of NK cells affects the growth of the tumor [3].
We found that the anti-tumor efficacy increases when the endogenous NK cells are depleted, and also when exogenous NK cells are injected into the tumor. We also show that the TME plays a significant role in anti-tumor efficacy in OV combination therapy, and illustrate the effect of different spatial patterns of OV injection [5]. The results illustrate a possible phenotypic switch within tumor populations in a given microenvironment, and suggest new anti-invasion therapies. These predictions were validated by our in vivo and in vitro experiments.

References
[1] Kanai R, … Rabkin SD, "Oncolytic herpes simplex virus vectors and chemotherapy: are combinatorial strategies more effective for cancer?", Future Oncology, 6(4), 619-634, 2010.
[2] Yoo J, et al., "Bortezomib-induced unfolded protein response increases oncolytic HSV-1 replication resulting in synergistic antitumor effect", Clin Cancer Res, 20(14), 3787-3798, 2014.
[3] Yangjin Kim, … Balveen Kaur and Avner Friedman, "Complex role of NK cells in regulation of oncolytic virus-bortezomib therapy", PNAS, 115(19), 4927-4932, 2018.
[4] Yangjin Kim, … Sean Lawler, and Mark Chaplain, "Role of extracellular matrix and microenvironment in regulation of tumor growth and LAR-mediated invasion in glioblastoma", PLoS One, 13(10):e0204865, 2018.
[5] Yangjin Kim, …, Hans G. Othmer, "Synergistic effects of bortezomib-OV therapy and anti-invasive strategies in glioblastoma: A mathematical model", special issue, submitted, 2018.

News matching "Influences on lobster (Jasus edwardsii) catch rate"

Success in Learning and Teaching Grants
The School of Mathematical Sciences has been awarded two Faculty L&T awards. Congratulations to Dr David Green for his successful grant "One Simulation Modelling Instruction Module" and to Drs Adrian Koerber, Paul McCann and Jim Denier for their successful grant "Graphics Calculators and beyond". Posted Tue 11 Mar 08.
Welcome to Dr Joshua Ross
We welcome Dr Joshua Ross as a new lecturer in the School of Mathematical Sciences. Joshua has moved over to Adelaide from the University of Cambridge. His research interests are mathematical modelling (especially mathematical biology) and operations research. Posted Mon 15 Mar 10.


ARC Grant successes
The School of Mathematical Sciences has again had outstanding success in the ARC Discovery and Linkage Projects schemes. Congratulations to the following staff for their success in the Discovery Project scheme: Prof Nigel Bean, Dr Josh Ross, Prof Phil Pollett, Prof Peter Taylor, New methods for improving active adaptive management in biological systems, $255,000 over 3 years; Dr Josh Ross, New methods for integrating population structure and stochasticity into models of disease dynamics, $248,000 over three years; A/Prof Matt Roughan, Dr Walter Willinger, Internet traffic-matrix synthesis, $290,000 over three years; Prof Patricia Solomon, A/Prof John Moran, Statistical methods for the analysis of critical care data, with application to the Australian and New Zealand Intensive Care Database, $310,000 over 3 years; Prof Mathai Varghese, Prof Peter Bouwknegt, Supersymmetric quantum field theory, topology and duality, $375,000 over 3 years; Prof Peter Taylor, Prof Nigel Bean, Dr Sophie Hautphenne, Dr Mark Fackrell, Dr Malgorzata O'Reilly, Prof Guy Latouche, Advanced matrix-analytic methods with applications, $600,000 over 3 years. Congratulations to the following staff for their success in the Linkage Project scheme: Prof Simon Beecham, Prof Lee White, A/Prof John Boland, Prof Phil Howlett, Dr Yvonne Stokes, Mr John Wells, Paving the way: an experimental approach to the mathematical modelling and design of permeable pavements, $370,000 over 3 years; Dr Amie Albrecht, Prof Phil Howlett, Dr Andrew Metcalfe, Dr Peter Pudney, Prof Roderick Smith, Saving energy on trains - demonstration, evaluation, integration, $540,000 over 3 years. Posted Fri 29 Oct 10.
Bushfire CRC post-graduate scholarship success
Congratulations to Mika Peace, who has been awarded a PhD scholarship from the Bushfire Cooperative Research Centre. Mika is working with Trent Mattner and Graham Mills (from the Bureau of Meteorology) on coupled fire-weather modelling. Posted Wed 6 Apr 11.
ARC Future Fellowship success
Associate Professor Zudi Lu has been awarded an ARC Future Fellowship. Associate Professor Lu, an Associate Professor in Statistics, will use the support provided by his Future Fellowship to further improve the theory and practice of econometric modelling of nonlinear spatial time series. Congratulations Zudi. Posted Thu 12 May 11.
ARC Grant Success
Congratulations to the following staff who were successful in securing funding from the Australian Research Council Discovery Projects scheme: Associate Professor Finnur Larusson, awarded $270,000 for his project Flexibility and symmetry in complex geometry; Dr Thomas Leistner, awarded $303,464 for his project Holonomy groups in Lorentzian geometry; Professor Michael Murray and Dr Daniel Stevenson (Glasgow), awarded $270,000 for their project Bundle gerbes: generalisations and applications; Professor Mathai Varghese, awarded $105,000 for his project Advances in index theory; and Professor Anthony Roberts and Professor Ioannis Kevrekidis (Princeton), awarded $330,000 for their project Accurate modelling of large multiscale dynamical systems for engineering and scientific simulation and analysis. Posted Tue 8 Nov 11.
Best paper prize at Membrane Symposium
Congratulations to Wei Xian Lim who was awarded the prize for the best student presentation at the Membrane Society of Australasia 2011 ECR Membrane Symposium for her talk on "Mathematical modelling of gas capture in porous materials". Xian is working on her PhD with Jim Hill and Barry Cox. Posted Mon 28 Nov 11.
Top-up scholarship available
A PhD top-up scholarship is available to support mathematical modelling of the interaction of ocean waves and sea ice. For information, see this advertisement. Posted Thu 1 Nov 12.
A/Prof Joshua Ross, 2017 Moran Medal recipient
Congratulations to Associate Professor Joshua Ross who has won the 2017 Moran Medal, awarded by the Australian Academy of Science. The Moran Medal recognises outstanding research by scientists up to 10 years post-PhD in applied probability, biometrics, mathematical genetics, psychometrics and statistics. Associate Professor Ross has made influential contributions to public health and conservation biology using mathematical modelling and statistics to help in decision making. Posted Fri 23 Dec 16.


Publications matching "Influences on lobster (Jasus edwardsii) catch rate"

Modelling Water Blending-Sensitivity of Optimal Policies
Webby, Roger; Green, David; Metcalfe, Andrew, 17th Biennial Congress on Modeling and Simulation, New Zealand 01/12/08
Stochastic cyclone modelling in the Bay of Bengal
Need, Steven; Lambert, Martin; Metcalfe, Andrew; Sen, D, Water Down Under 2008, Adelaide 14/04/08
Evolving gene frequencies in a population with three possible alleles at a locus
Hajek, Bronwyn; Broadbridge, P; Williams, G, Mathematical and Computer Modelling 47 (210–217) 2008
Mathematical modeling as an accurate predictive tool in capillary and microstructured fiber manufacture: The effects of preform rotation
Voyce, Christopher; Fitt, A; Monro, Tanya, Journal of Lightwave Technology 26 (791–798) 2008
Modelling survival in acute severe illness: Cox versus accelerated failure time models
Moran, John; Bersten, A; Solomon, Patricia; Edibam, C; Hunt, T, Journal of Evaluation in Clinical Practice 14 (83–93) 2008
The decoupling and solution of logistic and classical two-species lotka-volterra dynamics with variable production rates
Pearce, Charles; Leipnik, R, Biophysical Reviews and Letters 3 (183–194) 2008
The mathematical modelling of rotating capillary tubes for holey-fibre manufacture
Voyce, Christopher; Fitt, A; Monro, Tanya, Journal of Engineering Mathematics 60 (69–87) 2008
The decoupling & solution of logistic & classical two-species lotka-volterra dynamics with variable production rates
Pearce, Charles; Leipnik, R, Biomat 2007, Brazil 24/11/08
Computer algebra derives discretisations via self-adjoint multiscale modelling (Unpublished)
Roberts, Anthony John,
The Term Structure of Interest Rates in a Hidden Markov Setting
Elliott, Robert; Wilson, C, chapter in Hidden Markov Models in Finance (Vieweg, Springer Science+Business Media) 15–30, 2007
Inverse groundwater modelling in the Willunga Basin, South Australia
Knowles, I; Teubner, Michael; Yan, A; Rasser, Paul; Lee, Jong, Hydrogeology Journal 15 (1107–1118) 2007
The Mekong-applications of value at risk (VAR) and conditional value at risk (CVAR) simulation to the benefits, costs and consequences of water resources development in a large river basin
Webby, Roger; Adamson, Peter; Boland, J; Howlett, P; Metcalfe, Andrew; Piantadosi, J, Ecological Modelling 201 (89–96) 2007
El Nino effects and upwelling off South Australia
Middleton, Susan; Middleton, J; Van Ruth, Paul; Ward, Timothy; Arthur, C; McClean, J; Maltrud, M; Gill, P; Levings, A, Journal of Physical Oceanography 37 (2458–2477) 2007
Modelling extreme rainfall and tidal anomaly
Need, Steven; Lambert, Martin; Metcalfe, Andrew, 30th Hydrology and Water Resources Symposium, Launceston, Tasmania 04/12/06
Modelling multivariate extreme flood events
Wong, Hui; Need, Steven; Adamson, Peter; Lambert, Martin; Metcalfe, Andrew, 30th Hydrology and Water Resources Symposium, Launceston, Tasmania 04/12/06
Mathematical modelling of oxygen concentration in bovine and murine cumulus-oocyte complexes
Clark, Alys; Stokes, Yvonne; Lane, Michelle; Thompson, Jeremy, Reproduction 131 (999–1006) 2006
An analytic modelling approach for network routing algorithms that use "ant-like" mobile agents
Bean, Nigel; Costa, Andre, Computer Networks-The International Journal of Computer and Telecommunications Networking 49 (243–268) 2005
An inverse modelling technique for glass forming by gravity sagging
Agnon, Y; Stokes, Yvonne, European Journal of Mechanics B-Fluids 24 (275–287) 2005
Deterministic and stochastic modelling of endosome escape by Staphylococcus aureus: "quorum" sensing by a single bacterium
Koerber, Adrian; King, J; Williams, P, Journal of Mathematical Biology 50 (440–488) 2005
Investigation and modelling of traffic issues in immersive audio environments
McMahon, Jeremy; Rumsewicz, Michael; Boustead, P; Safaei, F, 2004 IEEE International Conference on Communications, Paris, France 20/06/04
Modelling thirty-day mortality in the acute respiratory distress syndrome (ARDS) in an adult ICU
Moran, John; Solomon, Patricia; Fox, V; Salagaras, M; Williams, P; Quinlan, K; Bersten, A, Anaesthesia and Intensive Care 32 (317–329) 2004
Reactions to genetically modified food crops and how perception of risks and benefits influences consumers' information gathering
Wilson, Carlene; Evans, G; Leppard, Phillip; Syrette, J, Risk Analysis 24 (1311–1321) 2004
Reynolds number effects in a simple planetary mixer
Clifford, M; Cox, Stephen; Finn, Matthew, Chemical Engineering Science 59 (3371–3379) 2004
The effects of social networks on disability in older Australians
Giles, Lynne Catherine; Metcalf, P; Glonek, Garique; Luszcz, M; Andrews, G, Journal of Aging and Health 16 (517–538) 2004
Development of Non-Homogeneous and Hierarchical Hidden Markov Models for Modelling Monthly Rainfall and Streamflow Time Series
Whiting, Julian; Lambert, Martin; Metcalfe, Andrew; Kuczera, George, World Water and Environmental Resources Congress (2004), Salt Lake City, Utah, USA 27/06/04
Stochastic modelling of tidal anomaly for estimation of flood risk in coastal areas
Ahmer, Ingrid; Lambert, Martin; Leonard, Michael; Metcalfe, Andrew, 28th International Hydrology and Water Resources Symposium, Wollongong, NSW, Australia 10/11/03
A Probabilistic algorithm for determining the fundamental matrix of a block M/G/1 Markov chain
Hunt, Emma, Mathematical and Computer Modelling 38 (1203–1209) 2003
A philosophy for the modelling of realistic nonlinear systems
Howlett, P; Torokhti, Anatoli; Pearce, Charles, Proceedings of the American Mathematical Society 132 (353–363) 2003
An approximate formula for the stress intensity factor for the pressurized star crack
Clements, David; Widana, Inyoman, Mathematical and Computer Modelling 37 (689–694) 2003
Method of hybrid approximations for modelling of multidimensional nonlinear systems
Torokhti, Anatoli; Howlett, P; Pearce, Charles, Multidimensional Systems and Signal Processing 14 (397–410) 2003
Modelling persistence in annual Australian point rainfall
Whiting, Julian; Lambert, Martin; Metcalfe, Andrew, Hydrology and Earth System Sciences 7 (197–211) 2003
Optimal mathematical models for nonlinear dynamical systems
Torokhti, Anatoli; Howlett, P; Pearce, Charles, Mathematical and Computer Modelling of Dynamical Systems 9 (327–343) 2003
Rumours, epidemics, and processes of mass action: Synthesis and analysis
Dickinson, Rowland; Pearce, Charles, Mathematical and Computer Modelling 38 (1157–1167) 2003
Low-dimensional modelling of dynamical systems applied to some dissipative fluid mechanics
Roberts, Anthony John, chapter in Nonlinear dynamics: from lasers to butterflies (World Scientific Publishing) 257–313, 2003
Modelling host tissue degradation by extracellular bacterial pathogens
King, J; Koerber, Adrian; Croft, J; Ward, J; Williams, P; Sockett, R, Mathematical Medicine and Biology (Print Edition) 20 (227–260) 2003
Modelling nonlinear dynamics of shape-memory-alloys with approximate models of coupled thermoelasticity
Melnik, R; Roberts, Anthony John, Zeitschrift fur Angewandte Mathematik und Mechanik 83 (93–104) 2003
Modelling the dynamics of turbulent floods
Mei, Z; Roberts, Anthony John; Li, Z, Siam Journal on Applied Mathematics 63 (423–458) 2003
Mortality and other event rates: what do they tell us about performance?
Moran, John; Solomon, Patricia, Critical care and Resuscitation 5 (292–304) 2003
Coastal flood modelling: Allowing for dependence between rainfall and tidal anomaly
Ahmer, Ingrid; Metcalfe, Andrew; Lambert, Martin; Deans, J, EMAC 2002, Brisbane, Australia 29/09/02
Decay rates of discrete phase-type distributions with infinitely-many phases
Bean, Nigel; Nielsen, B, Matrix-Analytic Methods Theory and Applications, Adelaide, Australia 14/07/02
A mathematical study of peristaltic transport of a Casson fluid
Mernone, Anacleto; Mazumdar, Jagan; Lucas, S, Mathematical and Computer Modelling 35 (895–912) 2002
Bivariate stochastic modelling of ephemeral streamflow
Cigizoglu, H; Adamson, Peter; Metcalfe, Andrew, Hydrological Processes 16 (1451–1465) 2002
Fractional Brownian motion and financial modelling
Elliott, Robert; Van Der Hoek, John, chapter in Mathematical Finance (Birkhauser) 140–151, 2001
Statistical modelling and prediction associated with the HIV/AIDS epidemic
Solomon, Patricia; Wilson, Susan, The Mathematical Scientist 26 (87–102) 2001
The modelling and numerical simulation of causal non-linear systems
Howlett, P; Torokhti, Anatoli; Pearce, Charles, Nonlinear Analysis-Theory Methods & Applications 47 (5559–5572) 2001
Modelling Overflow Traffic from Terrestrial Networks into Satellite Networks
Green, David, 8th International Conference on Telecommunications (June 2001), Bucharest, Romania 04/06/01
Modelling Service Time Distribution in Cellular Networks Using Phase-Type Service Distributions
Green, David; Asenstorfer, J; Jayasuriya, A,
Mathematical modelling of quorum sensing in bacteria
Ward, J; King, J; Koerber, Adrian; Williams, P; Croft, J; Sockett, R, Mathematical Medicine and Biology (Print Edition) 18 (263–292) 2001
A brief survey and synthesis of the roles of time in petri nets
Bowden, Fred David John, Mathematical and Computer Modelling 31 (55–68) 2000
A new perspective on the normalization of invariant measures for loss networks and other product form systems
Bean, Nigel; Stewart, Mark, Mathematical and Computer Modelling 31 (47–54) 2000
Algorithms for second moments in batch-movement queueing systems
Hunt, Emma, Mathematical and Computer Modelling 31 (299–305) 2000
Biomathematical modelling of physiological fluids using a Casson fluid with emphasis to peristalsis
Mernone, Anacleto; Mazumdar, Jagan, Australasian Physical and Engineering Sciences in Medicine 23 (94–100) 2000
Disease surveillance and data collection issues in epidemic modelling
Solomon, Patricia; Isham, V, Statistical Methods in Medical Research 9 (259–277) 2000
Maximal profit dimensioning and tariffing of loss networks with cross-connects
Bean, Nigel; Brown, Deborah; Taylor, Peter, Mathematical and Computer Modelling 31 (21–30) 2000
Quasi-reversibility and networks of queues with nonstandard batch movements
Taylor, Peter, Mathematical and Computer Modelling 31 (335–341) 2000
The exact solution of the general stochastic rumour
Pearce, Charles, Mathematical and Computer Modelling 31 (289–298) 2000
The positive inotropic effects of milrinone but not of digoxin are attenuated at short cycle lengths
Zeitz, Christopher; Ritchie, Rebecca; Jarrett, Richard; Hii, John; Wuttke, R; Horowitz, John, Journal of Cardiovascular Pharmacology 35 (427–433) 2000
When is a MAP Poisson?
Bean, Nigel; Green, David, Mathematical and Computer Modelling 31 (31–46) 2000

Advanced search options

You may be able to improve your search results by using the following syntax:

Query                        Matches the following
Asymptotic Equation          Anything with "Asymptotic" or "Equation".
+Asymptotic +Equation        Anything with "Asymptotic" and "Equation".
+Stokes -"Navier-Stokes"     Anything containing "Stokes" but not "Navier-Stokes".
Dynam*                       Anything containing "Dynamic", "Dynamical", "Dynamicist" etc.
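The matching rules above can be sketched in a few lines of Python. This is an illustrative re-implementation of the syntax as described, not the site's actual search engine; the function name `matches` and its behaviour on edge cases are assumptions.

```python
import re

def matches(query: str, text: str) -> bool:
    """Evaluate a simple search query against a text (illustrative sketch).

    Supported syntax, following the table above:
      +term           term must be present
      -term           term must be absent
      term            optional; at least one optional term must match
      "quoted text"   treated as a single phrase
      trailing *      prefix wildcard (Dynam* matches Dynamic, Dynamical, ...)
    Matching is case-insensitive and anchored at word boundaries.
    """
    # Tokenise the query, keeping quoted phrases (with optional +/- prefix) intact.
    tokens = re.findall(r'[+-]?"[^"]+"|[+-]?\S+', query)
    required, excluded, optional = [], [], []
    for tok in tokens:
        sign = ''
        if tok[0] in '+-':
            sign, tok = tok[0], tok[1:]
        tok = tok.strip('"')
        # A trailing * becomes a prefix match; otherwise match the whole word.
        if tok.endswith('*'):
            pattern = r'\b' + re.escape(tok[:-1]) + r'\w*'
        else:
            pattern = r'\b' + re.escape(tok) + r'\b'
        (required if sign == '+' else excluded if sign == '-' else optional).append(pattern)
    found = lambda p: re.search(p, text, re.IGNORECASE) is not None
    if any(found(p) for p in excluded):
        return False
    if not all(found(p) for p in required):
        return False
    # With no optional terms, required/excluded terms alone decide the result.
    return not optional or any(found(p) for p in optional)
```

For example, `matches('+Stokes -"Navier-Stokes"', 'Stokes flow in a channel')` is true, while the same query rejects any text containing the phrase "Navier-Stokes".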