The University of Adelaide
October 2018


Events matching "Bilinear L^p estimates for quasimodes"

Watching evolution in real time; problems and potential research areas.
15:10 Fri 26 May, 2006 :: G08 Mathematics Building University of Adelaide :: Prof Alan Cooper (Federation Fellow)

Recent studies (1) have indicated problems with our ability to use the genetic distances between species to estimate the time since their divergence (so-called molecular clocks). An exponential decay curve has been detected in comparisons of closely related taxa in mammal and bird groups, and rough approximations suggest that molecular clock calculations may be problematic for the recent past (e.g. <1 million years). Unfortunately, this period encompasses a number of key evolutionary events where estimates of timing are critical, such as modern human evolutionary history, the domestication of animals and plants, and most issues involved in conservation biology. A solution (formulated at UA) will be briefly outlined. A second area of active interest is the recent suggestion (2) that mitochondrial DNA diversity does not track population size in several groups, in contrast to standard thinking. This finding has been interpreted as showing that mtDNA may not be evolving neutrally, as has long been assumed.
Large ancient DNA datasets provide a means to examine these issues, by revealing evolutionary processes in real time (3). The data also provide a rich area for mathematical investigation as temporal information provides information about several parameters that are unknown in serial coalescent calculations (4).
  1. Ho SYW et al. Time dependency of molecular rate estimates and systematic overestimation of recent divergence times. Mol. Biol. Evol. 22, 1561-1568 (2005);
    Penny D, Nature 436, 183-184 (2005).
  2. Bazin E., et al. Population size does not influence mitochondrial genetic diversity in animals. Science 312, 570 (2006);
    Eyre-Walker A. Size does not matter for mitochondrial DNA, Science 312, 537 (2006).
  3. Shapiro B, et al. Rise and fall of the Beringian steppe bison. Science 306: 1561-1565 (2004);
    Chan et al. Bayesian estimation of the timing and severity of a population bottleneck from ancient DNA. PLoS Genetics, 2 e59 (2006).
  4. Drummond et al. Measurably evolving populations, Trends in Ecol. Evol. 18, 481-488 (2003);
    Drummond et al. Bayesian coalescent inference of past population dynamics from molecular sequences. Molecular Biology Evolution 22, 1185-92 (2005).
A Bivariate Zero-inflated Poisson Regression Model and application to some Dental Epidemiological data
14:10 Fri 27 Oct, 2006 :: G08 Mathematics Building University of Adelaide :: University Prof Sudhir Paul

Data in the form of paired (pre-treatment, post-treatment) counts arise in the study of the effects of several treatments after accounting for possible covariate effects. An example of such a data set comes from a dental epidemiological study in Belo Horizonte (the Belo Horizonte caries prevention study), which evaluated various programmes for reducing caries. These data may also show more pairs of zeros than can be accounted for by a simpler model, such as a bivariate Poisson regression model. In such situations we propose a zero-inflated bivariate Poisson regression (ZIBPR) model for the paired (pre-treatment, post-treatment) count data. We develop an EM algorithm to obtain maximum likelihood estimates of the parameters of the ZIBPR model. Further, we obtain the exact Fisher information matrix of the maximum likelihood estimates and develop a procedure for testing treatment effects. The procedure to detect treatment effects based on the ZIBPR model is compared, in terms of size, by simulations, with an earlier procedure using a zero-inflated Poisson regression (ZIPR) model of the post-treatment count with the pre-treatment count treated as a covariate. The procedure based on the ZIBPR model holds its level most effectively, and a further simulation study indicates that it has good power properties. We then compare our analysis of the decayed, missing and filled teeth (DMFT) index data from the caries prevention study, based on the ZIBPR model, with the analysis using a zero-inflated Poisson regression model in which the pre-treatment DMFT index is taken to be a covariate.
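The EM idea behind such models can be illustrated on the much simpler univariate zero-inflated Poisson; the sketch below is our own toy example (not the ZIBPR algorithm of the talk), treating the indicator of a structural zero as the latent variable:

```python
import numpy as np

def zip_em(y, n_iter=200):
    """EM for a univariate zero-inflated Poisson:
    P(Y=0) = pi + (1-pi)*exp(-lam), P(Y=k) = (1-pi)*Poisson(k; lam) for k>0.
    Returns estimates (pi, lam)."""
    y = np.asarray(y, dtype=float)
    pi, lam = 0.5, max(y.mean(), 0.1)  # crude starting values
    for _ in range(n_iter):
        # E-step: posterior probability that an observed zero is structural
        z = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
        # M-step: closed-form updates
        pi = z.mean()
        lam = ((1 - z) * y).sum() / (1 - z).sum()
    return pi, lam
```

The bivariate zero-inflated case of the talk follows the same E/M pattern, but with a joint latent structure and covariate-dependent means.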
An Introduction to invariant differential pairings
14:10 Tue 24 Jul, 2007 :: Mathematics G08 :: Jens Kroeske

On homogeneous spaces G/P, where G is a semi-simple Lie group and P is a parabolic subgroup (the ordinary sphere or projective spaces being examples), invariant operators, that is, operators between certain homogeneous bundles (functions, vector fields or forms being typical examples) that are invariant under the action of the group G, have been studied extensively. Especially on so-called Hermitian symmetric spaces, which arise through a 1-grading of the Lie algebra of G, there exists a complete classification of first order invariant linear differential operators, even on more general manifolds (those that allow a so-called almost Hermitian structure).

This talk will introduce the notion of an invariant bilinear differential pairing between sections of the aforementioned homogeneous bundles. Moreover we will discuss a classification (excluding certain totally degenerate cases) of all first order invariant bilinear differential pairings on manifolds with an almost hermitian symmetric structure. The similarities and connections with the linear operator classification will be highlighted and discussed.

Global and Local stationary modelling in finance: Theory and empirical evidence
14:10 Thu 10 Apr, 2008 :: G04 Napier Building University of Adelaide :: Prof. Dominique Guégan :: Universite Paris 1 Pantheon-Sorbonne

Modelling real data sets with second order stochastic processes requires that the data verify the second order stationarity condition, which concerns the unconditional moments of the process. It is in that context that most of the models developed since the sixties have been studied; we refer to the ARMA processes (Brockwell and Davis, 1988), the ARCH, GARCH and EGARCH models (Engle, 1982, Bollerslev, 1986, Nelson, 1990), the SETAR process (Lim and Tong, 1980 and Tong, 1990), the bilinear model (Granger and Andersen, 1978, Guégan, 1994), the EXPAR model (Haggan and Ozaki, 1980), the long memory process (Granger and Joyeux, 1980, Hosking, 1981, Gray, Zang and Woodward, 1989, Beran, 1994, Giraitis and Leipus, 1995, Guégan, 2000), and the switching process (Hamilton, 1988). For all these models we obtain an invertible causal solution under specific conditions on the parameters, and the forecast points and forecast intervals are then available.

Thus, the stationarity assumption is the basis for a general asymptotic theory for identification, estimation and forecasting: it guarantees that increasing the sample size yields more and more information of the same kind, which is essential for an asymptotic theory to make sense.

Non-stationary modelling also has a long tradition in econometrics, one based on the conditional moments of the data generating process. It appears mainly in heteroscedastic and volatility models, like the GARCH and related models, and in stochastic volatility processes (Ghysels, Harvey and Renault 1997). Non-stationarity also appears, in a different way, in structural change models like the switching models (Hamilton, 1988), the stopbreak model (Diebold and Inoue, 2001, Breidt and Hsu, 2002, Granger and Hyung, 2004) and the SETAR models, for instance. It can also be observed in linear models with time varying coefficients (Nicholls and Quinn, 1982, Tsay, 1987).

Thus, using stationary unconditional moments suggests global stationarity for the model, but using non-stationary unconditional moments, non-stationary conditional moments, or assuming the existence of states suggests that this global stationarity fails and that we only observe locally stationary behaviour.

The growing evidence of instability in the stochastic behaviour of stocks, of exchange rates, and of some economic data sets such as growth rates, characterised by volatility or by jumps in the variance or in the levels of prices, forces us to question the assumption of global stationarity and its consequences for modelling, particularly for forecasting. Several questions then arise from these remarks.

1. What kinds of non-stationarity affect the major financial and economic data sets? How to detect them?

2. Local and global stationarity: how are they defined?

3. What is the impact of evidence of non-stationarity on the statistics computed from globally non-stationary data sets?

4. How can we analyse data sets in the globally non-stationary framework? Does the asymptotic theory work in a non-stationary framework?

5. What kinds of models create local stationarity instead of global stationarity? How can we use them to develop modelling and forecasting strategies?

These questions have begun to be discussed in the economic literature. For some of them the answers are known; for others, very few works exist. In this talk I will discuss these problems and propose two new strategies and models to address them. Several interesting topics in empirical finance awaiting future research will also be discussed.

Comparison of Spectral and Wavelet Estimation of the Dynamic Linear System of a Wave Energy Device
12:10 Mon 2 May, 2011 :: 5.57 Ingkarni Wardli :: Mohd Aftar :: University of Adelaide

Renewable energy is one of the main issues of our time. The implications of fossil and nuclear energy, along with their limited sources, have prompted researchers and industry to look for other sources of renewable energy, for example hydro, wind and wave energy. In this seminar, I will talk about spectral and wavelet estimation of a linear dynamical system of motion for a heaving buoy wave energy device. The spectral estimates are based on the Fourier transform, while the wavelet estimate is based on the wavelet transform. Comparisons between two spectral estimates and a wavelet estimate of the amplitude response operator (ARO) for the dynamical system of the wave energy device show that the wavelet estimate of the ARO is much better for data both with and without noise.
Statistical challenges in molecular phylogenetics
15:10 Fri 20 May, 2011 :: Mawson Lab G19 lecture theatre :: Dr Barbara Holland :: University of Tasmania

This talk will give an introduction to the ways that mathematics and statistics are used in the inference of evolutionary (phylogenetic) trees. Taking a model-based approach to estimating the relationships between species has proven enormously effective; however, some tricky statistical challenges remain. The increasingly plentiful amount of DNA sequence data is a boon, but it is also throwing a spotlight on some of the shortcomings of current best practice, particularly in how we (1) assess the reliability of our phylogenetic estimates, and (2) choose appropriate models. This talk will give a general introduction to this area of research and will also highlight some results from two of my recent PhD students.
Permeability of heterogeneous porous media - experiments, mathematics and computations
15:10 Fri 27 May, 2011 :: B.21 Ingkarni Wardli :: Prof Patrick Selvadurai :: Department of Civil Engineering and Applied Mechanics, McGill University

Permeability is a key parameter important to a variety of applications in geological engineering and in the environmental geosciences. The conventional definition of Darcy flow enables the estimation of permeability at different levels of detail. This lecture will focus on the measurement of surface permeability characteristics of a large cuboidal block of Indiana Limestone, using a surface permeameter. The paper discusses the theoretical developments, the solution of the resulting triple integral equations and associated computational treatments that enable the mapping of the near surface permeability of the cuboidal region. This data combined with a kriging procedure is used to develop results for the permeability distribution at the interior of the cuboidal region. Upon verification of the absence of dominant pathways for fluid flow through the cuboidal region, estimates are obtained for the "Effective Permeability" of the cuboid using estimates proposed by Wiener, Landau and Lifschitz, King, Matheron, Journel et al., Dagan and others. The results of these estimates are compared with the geometric mean, derived from the computational estimates.
Optimal experimental design for stochastic population models
15:00 Wed 1 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Dan Pagendam :: CSIRO, Brisbane

Markov population processes are popular models for studying a wide range of phenomena including the spread of disease, the evolution of chemical reactions and the movements of organisms in population networks (metapopulations). Our ability to use these models effectively can be limited by our knowledge about parameters, such as disease transmission and recovery rates in an epidemic. Recently, there has been interest in devising optimal experimental designs for stochastic models, so that practitioners can collect data in a manner that maximises the precision of maximum likelihood estimates of the parameters for these models. I will discuss some recent work on optimal design for a variety of population models, beginning with some simple one-parameter models where the optimal design can be obtained analytically and moving on to more complicated multi-parameter models in epidemiology that involve latent states and non-exponentially distributed infectious periods. For these more complex models, the optimal design must be arrived at using computational methods and we rely on a Gaussian diffusion approximation to obtain analytical expressions for Fisher's information matrix, which is at the heart of most optimality criteria in experimental design. I will outline a simple cross-entropy algorithm that can be used for obtaining optimal designs for these models. We will also explore the improvements in experimental efficiency when using the optimal design over some simpler designs, such as the design where observations are spaced equidistantly in time.
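The role Fisher information plays in choosing observation times can be illustrated on a toy one-parameter design problem: a single binomial "survival" observation X ~ Bin(N, exp(-theta*t)). This example is our own illustration and is far simpler than the latent-state epidemic models of the talk:

```python
import numpy as np

def fisher_info(t, theta, N):
    """Fisher information about theta from X ~ Binomial(N, exp(-theta*t)):
    I(t) = N * t^2 * exp(-theta*t) / (1 - exp(-theta*t))."""
    p = np.exp(-theta * t)
    return N * t**2 * p / (1.0 - p)

def optimal_time(theta, N, t_grid):
    """D-optimal single observation time: maximise I(theta; t) over a grid.
    (With one parameter, D-optimality is just maximising the information.)"""
    return t_grid[np.argmax(fisher_info(t_grid, theta, N))]
```

Note the design depends on the unknown theta itself, which is exactly why locally optimal designs are evaluated at a prior guess, and why more elaborate (e.g. cross-entropy) algorithms are needed for multi-parameter models.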
Alignment of time course gene expression data sets using Hidden Markov Models
12:10 Mon 5 Sep, 2011 :: 5.57 Ingkarni Wardli :: Mr Sean Robinson :: University of Adelaide

Time course microarray experiments allow for insight into biological processes by measuring gene expression over a time period of interest. This project is concerned with time course data from a microarray experiment conducted on a particular variety of grapevine over the development of the grape berries at a number of different vineyards in South Australia. The aim of the project is to construct a methodology for combining the data from the different vineyards in order to obtain more precise estimates of the underlying behaviour of the genes over the development process. A major issue in doing so is that the rate of development of the grape berries is different at different vineyards. Hidden Markov models (HMMs) are a well established methodology for modelling time series data in a number of domains and have been previously used for gene expression analysis. Modelling the grapevine data presents a unique modelling issue, namely the alignment of the expression profiles needed to combine the data from different vineyards. In this seminar, I will describe our problem, review HMMs, present an extension to HMMs and show some preliminary results modelling the grapevine data.
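For readers unfamiliar with HMMs, the basic likelihood computation they rest on is the forward algorithm. The sketch below is generic HMM machinery (not the alignment extension developed in this project):

```python
import numpy as np

def hmm_forward_loglik(obs_logprob, trans, init):
    """Log-likelihood of an observation sequence under an HMM, via the
    (scaled) forward algorithm.

    obs_logprob[t, k] = log p(y_t | state k); trans[j, k] = p(k | j);
    init[k] = p(state k at t=0)."""
    T, K = obs_logprob.shape
    alpha = init * np.exp(obs_logprob[0])   # unnormalised forward weights
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()                    # rescale to avoid underflow
    for t in range(1, T):
        alpha = (alpha @ trans) * np.exp(obs_logprob[t])
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik
```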
Estimating transmission parameters for the swine flu pandemic
15:10 Fri 23 Sep, 2011 :: 7.15 Ingkarni Wardli :: Dr Kathryn Glass :: Australian National University

Following the onset of a new strain of influenza with pandemic potential, policy makers need specific advice on how fast the disease is spreading, who is at risk, and what interventions are appropriate for slowing transmission. Mathematical models play a key role in comparing interventions and identifying the best response, but models are only as good as the data that inform them. In the early stages of the 2009 swine flu outbreak, many researchers estimated transmission parameters - particularly the reproduction number - from outbreak data. These estimates varied, and were often biased by data collection methods, misclassification of imported cases or as a result of early stochasticity in case numbers. I will discuss a number of the pitfalls in achieving good quality parameter estimates from early outbreak data, and outline how best to avoid them. One of the early indications from swine flu data was that children were disproportionately responsible for disease spread. I will introduce a new method for estimating age-specific transmission parameters from both outbreak and seroprevalence data. This approach allows us to take account of empirical data on human contact patterns, and highlights the need to allow for asymmetric mixing matrices in modelling disease transmission between age groups. Applied to swine flu data from a number of different countries, it presents a consistent picture of higher transmission from children.
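One standard back-of-envelope estimator of the reproduction number from early outbreak data, assuming an exponentially distributed generation interval (only one of the many modelling choices whose pitfalls the talk discusses), fits the exponential growth rate and converts it:

```python
import numpy as np

def reproduction_number(cases, dt, mean_gen_time):
    """Crude early-outbreak estimate of R: fit the exponential growth
    rate r by log-linear regression of case counts against time, then
    use R = 1 + r * Tg (valid for an exponentially distributed
    generation interval of mean Tg)."""
    t = np.arange(len(cases)) * dt
    r = np.polyfit(t, np.log(cases), 1)[0]  # slope = growth rate
    return 1.0 + r * mean_gen_time
```

Its sensitivity to the assumed generation-interval distribution, and to noise and reporting biases in the early case counts, is precisely why such estimates varied so widely in 2009.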
Estimating disease prevalence in hidden populations
14:05 Wed 28 Sep, 2011 :: B.18 Ingkarni Wardli :: Dr Amber Tomas :: The University of Oxford

Estimating disease prevalence in "hidden" populations such as injecting drug users or men who have sex with men is an important public health issue. However, traditional design-based estimation methods are inappropriate because they assume that a list of all members of the population is available from which to select a sample. Respondent Driven Sampling (RDS) is a method developed over the last 15 years for sampling from hidden populations. Similarly to snowball sampling, it leverages the fact that members of hidden populations are often socially connected to one another. Although RDS is now used around the world, there are several common population characteristics which are known to cause estimates calculated from such samples to be significantly biased. In this talk I'll discuss the motivation for RDS, as well as some of the recent developments in methods of estimation.
Forecasting electricity demand distributions using a semiparametric additive model
15:10 Fri 16 Mar, 2012 :: B.21 Ingkarni Wardli :: Prof Rob Hyndman :: Monash University

Electricity demand forecasting plays an important role in short-term load allocation and long-term planning for future generation facilities and transmission augmentation. Planners must adopt a probabilistic view of potential peak demand levels, therefore density forecasts (providing estimates of the full probability distributions of the possible future values of the demand) are more helpful than point forecasts, and are necessary for utilities to evaluate and hedge the financial risk accrued by demand variability and forecasting uncertainty. Electricity demand in a given season is subject to a range of uncertainties, including underlying population growth, changing technology, economic conditions, prevailing weather conditions (and the timing of those conditions), as well as the general randomness inherent in individual usage. It is also subject to some known calendar effects due to the time of day, day of week, time of year, and public holidays. I will describe a comprehensive forecasting solution designed to take all the available information into account, and to provide forecast distributions from a few hours ahead to a few decades ahead. We use semi-parametric additive models to estimate the relationships between demand and the covariates, including temperatures, calendar effects and some demographic and economic variables. Then we forecast the demand distributions using a mixture of temperature simulation, assumed future economic scenarios, and residual bootstrapping. The temperature simulation is implemented through a new seasonal bootstrapping method with variable blocks. The model is being used by the state energy market operators and some electricity supply companies to forecast the probability distribution of electricity demand in various regions of Australia. It also underpinned the Victorian Vision 2030 energy strategy.
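The residual bootstrapping step can be illustrated with a plain moving-block bootstrap, which resamples contiguous blocks so that short-range serial dependence is preserved. This sketch uses fixed-length blocks; the talk's new seasonal scheme with variable blocks is not reproduced here:

```python
import numpy as np

def block_bootstrap(x, block_len, rng):
    """One moving-block bootstrap replicate of the series x: draw random
    contiguous blocks of length block_len and concatenate them, trimming
    to the original length."""
    x = np.asarray(x)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    rep = np.concatenate([x[s:s + block_len] for s in starts])
    return rep[:n]
```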
Comparison of spectral and wavelet estimators of transfer function for linear systems
12:10 Mon 18 Jun, 2012 :: B.21 Ingkarni Wardli :: Mr Mohd Aftar Abu Bakar :: University of Adelaide

We compare spectral and wavelet estimators of the response amplitude operator (RAO) of a linear system, with various input signals and added noise scenarios. The comparison is based on a model of a heaving buoy wave energy device (HBWED), which oscillates vertically as a single mode of vibration linear system. HBWEDs and other single degree of freedom wave energy devices such as the oscillating wave surge convertors (OWSC) are currently deployed in the ocean, making single degree of freedom wave energy devices important systems to both model and analyse in some detail. However, the results of the comparison relate to any linear system. It was found that the wavelet estimator of the RAO offers no advantage over the spectral estimators if both input and response time series data are noise free and long time series are available. If there is noise on only the response time series, only the wavelet estimator or the spectral estimator that uses the cross-spectrum of the input and response signals in the numerator should be used. For the case of noise on only the input time series, only the spectral estimator that uses the cross-spectrum in the denominator gives a sensible estimate of the RAO. If both the input and response signals are corrupted with noise, a modification to both the input and response spectrum estimates can provide a good estimator of the RAO. However, a combination of wavelet and spectral methods is introduced as an alternative RAO estimator. The conclusions apply for autoregressive emulators of sea surface elevation, impulse, and pseudorandom binary sequences (PRBS) inputs. However, a wavelet estimator is needed in the special case of a chirp input where the signal has a continuously varying frequency.
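The two cross-spectrum estimators described above can be sketched with standard Welch-type spectral estimates (a generic illustration, not the authors' exact implementation):

```python
import numpy as np
from scipy.signal import csd, welch

def rao_estimates(x, y, fs=1.0, nperseg=256):
    """Two spectral transfer-function estimators for a linear system
    with input x and response y.

    H1 = S_xy / S_xx (cross-spectrum in the numerator) is robust to
    noise on the response; H2 = S_yy / S_yx (cross-spectrum in the
    denominator) is robust to noise on the input."""
    f, Sxx = welch(x, fs=fs, nperseg=nperseg)
    _, Syy = welch(y, fs=fs, nperseg=nperseg)
    _, Sxy = csd(x, y, fs=fs, nperseg=nperseg)
    H1 = Sxy / Sxx
    H2 = Syy / np.conj(Sxy)   # S_yx = conj(S_xy)
    return f, H1, H2
```

For uncorrelated additive noise, the noise term averages out of the cross-spectrum but inflates the auto-spectrum, which is why the appropriate estimator depends on where the noise enters.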
The fundamental theorems of invariant theory, classical and quantum
15:10 Fri 10 Aug, 2012 :: B.21 Ingkarni Wardli :: Prof Gus Lehrer :: The University of Sydney

Let V = C^n, and let (-,-) be a non-degenerate bilinear form on V , which is either symmetric or anti-symmetric. Write G for the isometry group of (V , (-,-)); thus G = O_n (C) or Sp_n (C). The first fundamental theorem (FFT) provides a set of generators for End_G(V^{\otimes r} ) (r = 1, 2, . . . ), while the second fundamental theorem (SFT) gives all relations among the generators. In 1937, Brauer formulated the FFT in terms of his celebrated 'Brauer algebra' B_r (\pm n), but there has hitherto been no similar version of the SFT. One problem has been the generic non-semisimplicity of B_r (\pm n), which caused H. Weyl, in his work on invariants, to call it 'that enigmatic algebra'. I shall present a solution to this problem, which shows that there is a single idempotent in B_r (\pm n) which describes all the relations. The proof is through a new 'Brauer category', in which the fundamental theorems are easily formulated, and where a calculus of tangles may be used to prove these results. There are quantum analogues of the fundamental theorems which I shall also discuss. There are numerous applications in representation theory, geometry and topology. This is joint work with Ruibin Zhang.
Crystallographic groups II: generalisations
12:10 Fri 24 May, 2013 :: Ingkarni Wardli B19 :: Dr Wolfgang Globke :: University of Adelaide

The theory of crystallographic groups acting cocompactly on Euclidean space can be extended and generalised in many different ways. For example, instead of studying discrete groups of Euclidean isometries, one can consider groups of isometries for indefinite inner products. These are the fundamental groups of compact flat pseudo-Riemannian manifolds. Still more generally, one might study groups of affine transformations on n-space that are not required to preserve any bilinear form. Also, the condition of cocompactness can be dropped. In this talk, I will present some of the results obtained for these generalisations, and also discuss some of my own work on flat homogeneous pseudo-Riemannian spaces.
A new approach to pointwise heat kernel upper bounds on doubling metric measure spaces
12:10 Fri 7 Jun, 2013 :: Ingkarni Wardli B19 :: Prof Thierry Coulhon :: Australian National University

On doubling metric measure spaces endowed with a Dirichlet form and satisfying the Davies-Gaffney estimate, we show some characterisations of pointwise upper bounds of the heat kernel in terms of one-parameter weighted inequalities which correspond respectively to the Nash inequality and to a Gagliardo-Nirenberg type inequality when the volume growth is polynomial. This yields a new and simpler proof of the well-known equivalence between classical heat kernel upper bounds and the relative Faber-Krahn inequalities. We are also able to treat more general pointwise estimates where the heat kernel rate of decay is not necessarily governed by the volume growth. This is a joint work with Salahaddine Boutayeb and Adam Sikora.
Heat kernel estimates on non-compact Riemannian manifolds: why and how?
15:10 Fri 7 Jun, 2013 :: B.18 Ingkarni Wardli :: Prof Thierry Coulhon :: Australian National University

We will describe what is known and remains to be known about the connection between the large scale geometry of non-compact Riemannian manifolds (and more general metric measure spaces) and large time estimates of their heat kernel. We will show how some of these estimates can be characterised in terms of Sobolev inequalities and give applications to the boundedness of Riesz transforms.
Semiclassical restriction estimates
12:10 Fri 4 Apr, 2014 :: Ingkarni Wardli B20 :: Melissa Tacy :: University of Adelaide

Eigenfunctions of Hamiltonians arise naturally in the theory of quantum mechanics as stationary states of quantum systems. Their eigenvalues have an interpretation as the square root of E, where E is the energy of the system. We wish to better understand the high energy limit which defines the boundary between quantum and classical mechanics. In this talk I will focus on results regarding the restriction of eigenfunctions to lower dimensional subspaces, in particular to hypersurfaces. A convenient way to study such problems is to reframe them as problems in semiclassical analysis.
15:10 Fri 11 Apr, 2014 :: 5.58 Ingkarni Wardli :: Associate Professor John Middleton :: SARDI Aquatic Sciences and University of Adelaide

Aquaculture farming involves daily feeding of finfish and a subsequent excretion of nutrients into Spencer Gulf. Typically, finfish farming is done in six or so 50 m diameter cages over 600 m x 600 m lease sites. To help regulate the industry, it is desired that the finfish feed rates and the associated nutrient flux into the ocean are determined such that the maximum nutrient concentration c does not exceed a prescribed value (say cP) for ecosystem health. The prescribed value cP is determined by guidelines from the E.P.A. The concept is known as carrying capacity, since limiting the feed rates limits the biomass of the farmed finfish. Here, we model the concentrations that arise from a constant input flux (F) of nutrients in a source region (the cage or lease) using the (depth-averaged) two-dimensional advection diffusion equation for constant and sinusoidal (tidal) currents. Application of the divergence theorem to this equation results in a new scale estimate of the maximum flux F (and thus feed rate), given by F = cP / T* (1), where cP is the maximum allowed concentration and T* is a new time scale of “flushing” that involves both advection and diffusion. The scale estimate (1) is then shown to compare favourably with mathematically exact solutions of the advection diffusion equation that are obtained using Green’s functions and Fourier transforms. The maximum nutrient flux and associated feed rates are then estimated everywhere in Spencer Gulf through the development and validation of a hydrodynamic model. The model provides seasonal averages of the mean currents U and horizontal diffusivities KS that are needed to estimate T*. The diffusivities are estimated from a shear dispersal model of the tides, which are very large in the gulf. The estimates have been provided to PIRSA Fisheries and Aquaculture to assist in the sustainable expansion of finfish aquaculture.
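Purely to illustrate the units and orders of magnitude in the scale estimate (1), here is a sketch in which T* is assumed to combine an advective time L/U and a diffusive time L^2/K harmonically. That combination is our assumption for illustration only; the talk derives T* exactly from the divergence theorem:

```python
def max_nutrient_flux(c_p, L, U, K):
    """Illustrative scale estimate F = c_p / T*, with T* assumed (our
    hypothetical choice) to combine an advective flushing time L/U and a
    diffusive flushing time L^2/K harmonically, so the faster process
    dominates."""
    t_adv = L / U          # advective flushing time
    t_diff = L**2 / K      # diffusive flushing time
    t_star = 1.0 / (1.0 / t_adv + 1.0 / t_diff)
    return c_p / t_star
```

Stronger currents or larger diffusivities shorten T* and so raise the admissible flux, consistent with the physical intuition behind (1).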
Estimates for eigenfunctions of the Laplacian on compact Riemannian manifolds
12:10 Fri 1 Aug, 2014 :: Ingkarni Wardli B20 :: Andrew Hassell :: Australian National University

I am interested in estimates on eigenfunctions, accurate in the high-eigenvalue limit. I will discuss estimates on the size (as measured by L^p norms) of eigenfunctions, on the whole Riemannian manifold, at the boundary, or at an interior hypersurface. The link between high-eigenvalue estimates, geometry, and the dynamics of geodesic flow will be emphasized.
Quasimodes that do not Equidistribute
13:10 Tue 19 Aug, 2014 :: Ingkarni Wardli B17 :: Shimon Brooks :: Bar-Ilan University

The QUE Conjecture of Rudnick-Sarnak asserts that eigenfunctions of the Laplacian on Riemannian manifolds of negative curvature should equidistribute in the large eigenvalue limit. For a number of reasons, it is expected that this property may be related to the (conjectured) small multiplicities in the spectrum. One way to study this relationship is to ask about equidistribution for "quasimodes", or approximate eigenfunctions, in place of highly-degenerate eigenspaces. We will discuss the case of surfaces of constant negative curvature; in particular, we will explain how to construct some examples of sufficiently weak quasimodes that do not satisfy QUE, and show how they fit into the larger theory.
Inferring absolute population and recruitment of southern rock lobster using only catch and effort data
12:35 Mon 22 Sep, 2014 :: B.19 Ingkarni Wardli :: John Feenstra :: University of Adelaide

Abundance estimates from a data-limited version of catch survey analysis are compared to those from a novel one-parameter deterministic method. The bias of both methods is explored using simulation testing based on a more complex, data-rich stock assessment population dynamics fishery operating model, examining the impact of varying levels of observation error in the data as well as model process error. Recruitment was consistently better estimated than legal size population, the latter being most sensitive to increasing observation errors. A hybrid of the data-limited methods is proposed as the most robust approach. A more statistically conventional errors-in-variables approach may also be touched upon if time permits.
A Hybrid Markov Model for Disease Dynamics
12:35 Mon 29 Sep, 2014 :: B.19 Ingkarni Wardli :: Nicolas Rebuli :: University of Adelaide

Modelling the spread of infectious diseases is fundamental to protecting ourselves from potentially devastating epidemics. Among other factors, two key indicators for the severity of an epidemic are the size of the epidemic and the time until the last infectious individual is removed. To estimate the distribution of the size and duration of an epidemic (within a realistic population) an epidemiologist will typically use Monte Carlo simulations of an appropriate Markov process. However, the number of states in the simplest Markov epidemic model, the SIR model, is quadratic in the population size and so Monte Carlo simulations are computationally expensive. In this talk I will discuss two methods for approximating the SIR Markov process and I will demonstrate the approximation error by comparing probability distributions and estimates of the distributions of the final size and duration of an SIR epidemic.
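The Monte Carlo baseline described above is exact stochastic simulation of the SIR Markov chain. A minimal Gillespie-style sketch (our illustration of the standard method, not the hybrid approximation of the talk) returns the two key epidemic summaries, final size and duration:

```python
import numpy as np

def sir_gillespie(N, I0, beta, gamma, rng):
    """Exact (Gillespie) simulation of the stochastic SIR Markov chain.
    Returns (final_size, duration): the total number ever infected and
    the time at which the last infectious individual is removed."""
    S, I, t = N - I0, I0, 0.0
    while I > 0:
        rate_inf = beta * S * I / N        # infection: (S, I) -> (S-1, I+1)
        rate_rec = gamma * I               # recovery:  I -> I-1
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)  # exponential time to next event
        if rng.random() * total < rate_inf:
            S -= 1
            I += 1
        else:
            I -= 1
    return N - S, t
```

Each run costs one event per transition, and the state space grows quadratically with N, which is the computational burden the talk's approximations aim to reduce.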
Bilinear L^p estimates for quasimodes
12:10 Fri 14 Aug, 2015 :: Ingkarni Wardli B17 :: Melissa Tacy :: The University of Adelaide

Understanding the growth of the product $uv$ of eigenfunctions satisfying $$\Delta{}u=-\lambda^{2}u,\qquad\Delta{}v=-\mu^{2}v,$$ is vital to understanding the regularity properties of non-linear PDE such as the non-linear Schr\"{o}dinger equation. In this talk I will discuss some recent results, obtained in collaboration with Zihua Guo and Xiaolong Han, which provide a full range of estimates of the form $$||uv||_{L^{p}}\leq{}G(\lambda,\mu)||u||_{L^{2}}||v||_{L^{2}}$$ where $u$ and $v$ are approximate eigenfunctions of the Laplacian. We obtain these results by recasting the problem as a more general, related semiclassical problem.
Typhoons and Tigers
12:10 Fri 23 Oct, 2015 :: Hughes Lecture Room 322 :: Assoc. Prof. Andrew Metcalfe :: School of Mathematical Sciences

The Sundarbans, situated on the north coast of India and in south-west Bangladesh, form one of the world's largest mangrove regions (4100 square kilometres). In India, there are over 4 million inhabitants on the deltaic islands in the region. There is a diverse flora and fauna, and it is the only remaining habitat of the Bengal tiger. The Sundarbans is a UNESCO World Heritage Site and International Biodiversity Reserve. However, the Sundarbans are prone to flooding from the cyclones that regularly develop in the Bay of Bengal. In this talk I shall describe a stochastic model for the flood risk and explain how this can be used to make decisions about flood mitigation strategies and to provide estimates of the increase in flood risk due to rising sea levels and climate change.
Harmonic analysis of Hodge-Dirac operators
12:10 Fri 13 May, 2016 :: Eng & Maths EM205 :: Pierre Portal :: Australian National University

When the metric on a Riemannian manifold is perturbed in a rough (merely bounded and measurable) manner, do basic estimates involving the Hodge-Dirac operator $D = d+d^*$ remain valid? Even in the model case of a perturbation of the Euclidean metric on $\mathbb{R}^n$, this is a difficult question. For instance, the fact that the $L^2$ estimate $\|Du\|_2 \sim \|\sqrt{D^{2}}u\|_2$ remains valid for perturbed versions of $D$ was a famous conjecture made by Kato in 1961 and solved, positively, in a groundbreaking paper of Auscher, Hofmann, Lacey, McIntosh and Tchamitchian in 2002. In the past fifteen years, a theory has emerged from the solution of this conjecture, making rough perturbation problems much more tractable. In this talk, I will give a general introduction to this theory, and present one of its latest results: a flexible approach to $L^p$ estimates for the holomorphic functional calculus of $D$. This is joint work with D. Frey (Delft) and A. McIntosh (ANU).
Product Hardy spaces associated to operators with heat kernel bounds on spaces of homogeneous type
12:10 Fri 19 Aug, 2016 :: Ingkarni Wardli B18 :: Lesley Ward :: University of South Australia

Much effort has been devoted to generalizing the Calderón-Zygmund theory in harmonic analysis from Euclidean spaces to metric measure spaces, or spaces of homogeneous type. Here the underlying space R^n with Euclidean metric and Lebesgue measure is replaced by a set X with a general metric or quasi-metric and a doubling measure. Further, one can replace the Laplacian operator that underpins the Calderón-Zygmund theory by more general operators L satisfying heat kernel estimates. I will present recent joint work with P. Chen, X.T. Duong, J. Li and L.X. Yan along these lines. We develop the theory of product Hardy spaces H^p_{L_1,L_2}(X_1 x X_2), for 1

Publications matching "Bilinear L^p estimates for quasimodes"

New estimates for the Kullback-Leibler distance and its applications
Dragomir, S; Gluscevic, Vido, Sixth International Conference on Nonlinear Functional Analysis, Gyeongsang & Kyungnam Nat Universities, Korea 01/09/00
