The University of Adelaide
August 2019


Events matching "Fibonacci: order, chaos, and the Holy Grail"

Fibonacci: order, chaos, and the Holy Grail
11:10 Mon 30 Apr, 2007 :: Maths G08 :: Dr Alison Wolff :: School of Mathematical Sciences

Learning to Satisfy Actuator and Camera Networks
15:10 Fri 25 May, 2007 :: G08 Mathematics Building University of Adelaide :: Assistant Prof Mark Coates

Wireless sensor and actuator networks (SANETs) represent an important extension of sensor networks, allowing nodes within the network to make autonomous decisions and perform actions (actuation) in response to sensor measurements and shared information. SANETs combine aspects of sensor networks and multi-robot systems, and the merger gives rise to an array of challenges absent from conventional sensor networks. SANETs are active systems that must use the sensed information to modify the environment in order to elicit a desired response. This involves the development of an actuation strategy, a set of decision rules that specify how the network responds to sensed conditions. In this talk, I will discuss the challenges involved in using distributed algorithms to learn suitable actuation strategies. I will draw connections with the class of learning satisfiability problems, which includes a range of learning tasks involving multiple constraints.
An Introduction to invariant differential pairings
14:10 Tue 24 Jul, 2007 :: Mathematics G08 :: Jens Kroeske

On homogeneous spaces G/P, where G is a semi-simple Lie group and P is a parabolic subgroup (the ordinary sphere or projective spaces being examples), invariant operators, that is, operators between certain homogeneous bundles (functions, vector fields or forms being amongst the typical examples) that are invariant under the action of the group G, have been studied extensively. In particular, on so-called Hermitian symmetric spaces, which arise through a 1-grading of the Lie algebra of G, there exists a complete classification of first-order invariant linear differential operators, even on more general manifolds (those that admit a so-called almost Hermitian structure).

This talk will introduce the notion of an invariant bilinear differential pairing between sections of the aforementioned homogeneous bundles. Moreover we will discuss a classification (excluding certain totally degenerate cases) of all first order invariant bilinear differential pairings on manifolds with an almost hermitian symmetric structure. The similarities and connections with the linear operator classification will be highlighted and discussed.

Likelihood inference for a problem in particle physics
15:10 Fri 27 Jul, 2007 :: G04 Napier Building University of Adelaide :: Prof. Anthony Davison

The Large Hadron Collider (LHC), a particle accelerator located at CERN, near Geneva, is (currently!) expected to start operation in early 2008. It is located in an underground tunnel 27km in circumference, and when fully operational, will be the world's largest and highest energy particle accelerator. It is hoped that it will provide evidence for the existence of the Higgs boson, the last remaining particle of the so-called Standard Model of particle physics. The quantity of data that will be generated by the LHC is roughly equivalent to that of the European telecommunications network, but this will be boiled down to just a few numbers. After a brief introduction, this talk will outline elements of the statistical problem of detecting the presence of a particle, and then sketch how higher order likelihood asymptotics may be used for signal detection in this context. The work is joint with Nicola Sartori, of the Università Ca' Foscari, in Venice.
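As a hedged sketch of the basic statistical setting only (not the higher-order likelihood asymptotics of the talk), particle searches often reduce to testing for a signal s on top of a known background b in Poisson-distributed event counts; Wilks' theorem converts the likelihood ratio into an approximate Gaussian significance. All numbers below are invented for illustration.

```python
import math

def poisson_loglik(n, mu):
    # log-likelihood of observing n counts with mean mu (dropping the n! constant)
    return n * math.log(mu) - mu

def signal_significance(n_obs, background):
    """Approximate significance (in Gaussian sigmas) of an excess over a known
    background: test H0: s = 0 against H1: s > 0 for n ~ Poisson(background + s),
    using the likelihood-ratio statistic and Wilks' theorem."""
    s_hat = max(n_obs - background, 0.0)  # constrained MLE of the signal
    llr = 2.0 * (poisson_loglik(n_obs, background + s_hat)
                 - poisson_loglik(n_obs, background))
    return math.sqrt(llr)

# e.g. 25 events observed where 10 background events were expected:
z = signal_significance(25, 10.0)  # roughly a 4-sigma excess
```

The real analyses refine this in many ways (nuisance parameters, look-elsewhere corrections, and the higher-order asymptotics the talk describes), but the likelihood ratio is the common core.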
Insights into the development of the enteric nervous system and Hirschsprung's disease
15:10 Fri 24 Aug, 2007 :: G08 Mathematics building University of Adelaide :: Assoc. Prof. Kerry Landman :: Department of Mathematics and Statistics, University of Melbourne

During the development of the enteric nervous system, neural crest (NC) cells must first migrate into and colonise the entire gut from stomach to anal end. The migratory precursor NC cells change type and differentiate into neurons and glia cells. These cells form the enteric nervous system, which gives rise to normal gut function and peristaltic contraction. Failure of the NC cells to invade the whole gut results in a lack of neurons in a length of the terminal intestine. This potentially fatal condition, marked by intractable constipation, is called Hirschsprung's Disease. The interplay between cell migration, cell proliferation and embryonic gut growth is important to the success of the NC cell colonisation process. Multiscale models are needed in order to model the different spatiotemporal scales of the NC invasion. For example, the NC invasion wave moves into unoccupied regions of the gut with a wave speed of around 40 microns per hour. New time-lapse techniques have shown that there is a web-like network structure within the invasion wave. Furthermore, within this network, individual cell trajectories vary considerably. We have developed a population-scale model for basic rules governing NC cell invasive behaviour incorporating the important mechanisms. The model predictions were tested experimentally, and mathematical and experimental results agreed. The results provide an understanding of why many of the genes implicated in Hirschsprung's Disease influence NC population size. Our recently developed individual cell-based model also produces an invasion wave with a well-defined wave speed; in addition, individual cell trajectories within the invasion wave can be extracted. Further challenges in modelling the various scales of the developmental system will be discussed.
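The population-scale invasion-wave behaviour described above can be illustrated with the classical Fisher-KPP equation, a standard (and far simpler) model combining diffusion and proliferation; the NC-cell model of the talk is more elaborate, and the parameter values below are arbitrary.

```python
import numpy as np

def fisher_wave_speed(D=1.0, r=1.0, L=100.0, nx=500, t_end=30.0, dt=0.01):
    """Estimate the invasion wave speed of u_t = D u_xx + r u (1 - u) with an
    explicit finite-difference scheme and crude zero-flux boundaries.
    Theory predicts a minimum wave speed of 2 * sqrt(D * r)."""
    x = np.linspace(0.0, L, nx)
    dx = x[1] - x[0]
    u = np.where(x < 5.0, 1.0, 0.0)          # colonised region on the left
    front = []
    for _ in range(int(t_end / dt)):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        lap[0], lap[-1] = lap[1], lap[-2]    # zero-flux ends
        u = u + dt * (D * lap + r * u * (1.0 - u))
        front.append(x[np.argmax(u < 0.5)])  # first grid point with u < 0.5
    half = len(front) // 2                   # discard the initial transient
    return (front[-1] - front[half]) / (dt * (len(front) - 1 - half))

speed = fisher_wave_speed()  # should settle close to 2*sqrt(D*r) = 2
```

Rescaling the nondimensional wave speed by measured diffusivity and proliferation rates is how such population-scale models are confronted with observed speeds like the 40 microns per hour quoted above.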
Statistical Critique of the Intergovernmental Panel on Climate Change's work on Climate Change.
18:00 Wed 17 Oct, 2007 :: Union Hall University of Adelaide :: Mr Dennis Trewin

Climate change is one of the most important issues facing us today. Many governments have introduced or are developing policy interventions to (a) reduce the growth of greenhouse gas emissions in order to mitigate future climate change, or (b) adapt to future climate change. This important work deserves a high quality statistical data base, but there are statistical shortcomings in the work of the Intergovernmental Panel on Climate Change (IPCC). There has been very little involvement of qualified statisticians in the very important work of the IPCC, which otherwise appears scientifically meritorious. Mr Trewin will explain these shortcomings and outline his views on likely future climate change, taking into account the statistical deficiencies. His conclusions suggest climate change is still an important issue that needs to be addressed, but that the range of likely outcomes is a lot lower than has been suggested by the IPCC. This presentation will be based on an invited paper presented at the OECD World Forum.
Add one part chaos, one part topology, and stir well...
13:10 Fri 19 Oct, 2007 :: Engineering North 132 :: Dr Matt Finn :: School of Mathematical Sciences

Stirring and mixing of fluids occurs everywhere, from adding milk to a cup of coffee, right through to industrial-scale chemical blending. So why stir in the first place? Is it possible to do it badly? And how can you make sure you do it effectively? I will attempt to answer these questions using a few thought experiments, some dynamical systems theory and a little topology.
Rubber Balloons -- Prototypes of Hysteresis
15:10 Fri 16 Nov, 2007 :: G04 Napier Building University of Adelaide :: Emeritus Prof. Ingo Muller :: Technical University Berlin

Rubber balloons are characterized by a non-monotone pressure-radius relation, which presages interesting non-trivial stability problems. A stability criterion is developed and exploited to show that the balloon may be stabilized at any radius by loading it with a piston under an elastic spring, provided the spring is hard enough. If two connected balloons are subjected to an inflation-deflation cycle, the pressure-radius curve exhibits a fairly simple hysteresis loop. More complex hysteresis loops appear when more balloons are inflated together. And if many balloons are inflated and deflated at the same time, the hysteresis loop assumes a form reminiscent of pseudo-elasticity. Stability in those complex cases is determined by a simple suggestive argument.

References:
[1] W. Kitsche, I. Muller, P. Strehlow. Simulation of pseudo-elastic behaviour in a system of rubber balloons. In: Metastability and Incompletely Posed Problems, S. Antman, J. L. Ericksen, D. Kinderlehrer, I. Muller (eds.), IMA Volume No. 3, Springer Verlag, New York (1987).
[2] I. Muller, P. Strehlow. Rubber and Rubber Balloons, Springer Lecture Notes in Physics, Springer Verlag, Heidelberg (2004).
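For reference, the non-monotone pressure-radius relation mentioned above takes, for an ideal spherical neo-Hookean membrane, the standard textbook form (this illustrates the generic shape and is not necessarily the constitutive law used in the talk):

```latex
p(\lambda) \;=\; \frac{2\mu h_0}{r_0}\left(\lambda^{-1} - \lambda^{-7}\right),
\qquad \lambda = \frac{r}{r_0},
```

where \mu is the shear modulus, h_0 the undeformed wall thickness and r_0 the undeformed radius. The pressure rises to a maximum at \lambda = 7^{1/6} \approx 1.38 and then falls, and this descending branch is the source of the instabilities and hysteresis discussed in the talk.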
Global and Local stationary modelling in finance: Theory and empirical evidence
14:10 Thu 10 Apr, 2008 :: G04 Napier Building University of Adelaide :: Prof. Dominique Guégan :: Universite Paris 1 Pantheon-Sorbonne

Modelling real data sets with second-order stochastic processes requires that the data satisfy the second-order stationarity condition. This stationarity condition concerns the unconditional moments of the process. It is in that context that most of the models developed since the 1960s have been studied: the ARMA processes (Brockwell and Davis, 1988), the ARCH, GARCH and EGARCH models (Engle, 1982, Bollerslev, 1986, Nelson, 1990), the SETAR process (Lim and Tong, 1980 and Tong, 1990), the bilinear model (Granger and Andersen, 1978, Guégan, 1994), the EXPAR model (Haggan and Ozaki, 1980), the long memory process (Granger and Joyeux, 1980, Hosking, 1981, Gray, Zang and Woodward, 1989, Beran, 1994, Giraitis and Leipus, 1995, Guégan, 2000), and the switching process (Hamilton, 1988). For all these models, an invertible causal solution exists under specific conditions on the parameters, and forecast points and forecast intervals are then available.

Thus, the stationarity assumption is the basis for a general asymptotic theory of identification, estimation and forecasting. It guarantees that increasing the sample size yields more and more information of the same kind, which is essential for an asymptotic theory to make sense.

Non-stationary modelling also has a long tradition in econometrics, based on the conditional moments of the data generating process. It appears mainly in the heteroscedastic and volatility models, like the GARCH and related models, and stochastic volatility processes (Ghysels, Harvey and Renault, 1997). Non-stationarity also appears in a different way through structural change models like the switching models (Hamilton, 1988), the stopbreak model (Diebold and Inoue, 2001, Breidt and Hsu, 2002, Granger and Hyung, 2004) and the SETAR models, for instance. It can also be observed in linear models with time-varying coefficients (Nicholls and Quinn, 1982, Tsay, 1987).

Thus, using stationary unconditional moments suggests global stationarity for the model, whereas using non-stationary unconditional moments, non-stationary conditional moments, or assuming the existence of states suggests that this global stationarity fails and that we only observe locally stationary behaviour.

The growing evidence of instability in the stochastic behaviour of stocks, exchange rates and some economic data sets (growth rates, for instance), characterised by volatility or by jumps in the variance or in the level of prices, forces us to question the assumption of global stationarity and its consequences for modelling, particularly for forecasting. Several questions arise from these remarks.

1. What kinds of non-stationarity affect the major financial and economic data sets? How can they be detected?

2. Local and global stationarity: how are they defined?

3. What is the impact of evidence of non-stationarity on statistics computed from globally non-stationary data sets?

4. How can we analyse data sets in the globally non-stationary framework? Does the asymptotic theory work in a non-stationary framework?

5. What kinds of models create local stationarity instead of global stationarity? How can we use them to develop modelling and forecasting strategies?

These questions have begun to be discussed in the economic literature. For some of them the answers are known; for others, very few works exist. In this talk I will discuss all these problems and propose two new strategies and models to address them. Several interesting topics in empirical finance awaiting future research will also be discussed.
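A toy illustration of the local-versus-global distinction discussed above (with all parameters invented) is an AR(1) process whose innovation variance switches halfway through the sample: each half is stationary on its own, but moment estimates computed on the full sample misrepresent both regimes.

```python
import numpy as np

def switching_ar1(n=4000, phi=0.5, sigmas=(1.0, 3.0), seed=0):
    """AR(1) x_t = phi * x_{t-1} + eps_t whose innovation standard deviation
    switches halfway through: 'locally stationary' on each half, but not
    globally stationary. Parameter values are invented for illustration."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        sigma = sigmas[0] if t < n // 2 else sigmas[1]
        x[t] = phi * x[t - 1] + sigma * rng.normal()
    return x

x = switching_ar1()
# The two halves have very different variances (theory: sigma^2 / (1 - phi^2),
# i.e. about 1.33 and 12 here), so a single "global" variance is misleading.
v1, v2 = x[:2000].var(), x[2000:].var()
```

Detecting such regimes from data, and building estimation and forecasting theory around them, is exactly the programme sketched in questions 1-5 above.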

Free surface Stokes flows with surface tension
15:10 Fri 5 Sep, 2008 :: G03 Napier Building University of Adelaide :: Prof. Darren Crowdy :: Imperial College London

In this talk, we will survey a number of different free boundary problems involving slow viscous (Stokes) flows in which surface tension is active on the free boundary. Both steady and unsteady flows will be considered. Motivating applications range from industrial processes such as viscous sintering (where end-products are formed as a result of the surface-tension-driven densification of a compact of smaller particles that are heated in order that they coalesce) to biological phenomena such as understanding how organisms swim (i.e. propel themselves) at low Reynolds numbers. Common to our approach to all these problems will be an analytical/theoretical treatment of model problems via complex variable methods -- techniques well-known at infinite Reynolds numbers but used much less often in the Stokes regime. These model problems can give helpful insights into the behaviour of the true physical systems.
Boltzmann's Equations for Suspension Flow in Porous Media and Correction of the Classical Model
15:10 Fri 13 Mar, 2009 :: Napier LG29 :: Prof Pavel Bedrikovetsky :: University of Adelaide

Suspension/colloid transport in porous media is a basic phenomenon in environmental, petroleum and chemical engineering. A suspension of particles moves through porous media and particles are captured by straining or attraction. We revise the classical equations for particle mass balance and particle capture kinetics and show that they behave unrealistically in cases of large dispersion and of flow-free filtration. In order to resolve these paradoxes, a pore-scale model is derived. The model can be transformed into a Boltzmann equation with a particle distribution over pores. Introducing sink-source terms into the Boltzmann equation results in much simpler calculations than the traditional Chapman-Enskog averaging procedure. A technique of projection operators in the Hilbert space of Fourier images is used. The projection subspace is constructed so as to avoid dependence of the averaged equations on the sink-source terms. The averaging results in explicit expressions for the particle flux and capture rate. The particle flux expression describes the decrease of the advective particle velocity relative to the carrier water velocity, due to preferential capture of "slow" particles in small pores. The capture rate kinetics describes capture from either advective or diffusive fluxes. The equations derived exhibit positive advection velocity for any dispersion, and particle capture in immobile fluid, which resolves the above-mentioned paradoxes. Finally, we discuss validation of the model for propagation of contaminants in aquifers, for filtration, for potable water production by artesian wells, and for formation damage in oilfields.
15:10 Fri 9 Oct, 2009 :: MacBeth Lecture Theatre :: Prof Guyan Robertson :: University of Newcastle, UK

Buildings were created by J. Tits in order to give a systematic geometric interpretation of simple Lie groups (and of simple algebraic groups). Buildings have since found applications in many areas of mathematics. This talk will give an informal introduction to these beautiful objects.
Manifold destiny: a talk on water, fire and life
15:10 Fri 6 Nov, 2009 :: MacBeth Lecture Theatre :: Dr Sanjeeva Balasuriya :: University of Adelaide

Manifolds are important entities in dynamical systems, and organise space into regions in which different motions occur. For example, intersections between stable and unstable manifolds in discrete systems result in chaotic motion. This talk will focus on manifolds and their locations in continuous dynamical systems, and in particular on Melnikov's method and its adaptations for determining the effect of perturbations on manifolds. The relevance of such adaptations to a surprising range of applications will be shown, in addition to recent theoretical developments inspired by such problems. The applications addressed in this talk include understanding the motion of fluid near oceanic eddies and currents, optimising mixing in nano-fluidic devices in order to improve reactions, computing the speed of a flame front, and finding the spreading rate of bacterial colonies.
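For readers unfamiliar with the method: in its classical form, for a time-periodic perturbation \dot{x} = f(x) + \varepsilon g(x,t) of a planar Hamiltonian system with an unperturbed homoclinic orbit q_0(t), the Melnikov function measures the leading-order signed distance between the perturbed stable and unstable manifolds (this is the standard textbook statement; the talk concerns its adaptations):

```latex
M(t_0) \;=\; \int_{-\infty}^{\infty} f\bigl(q_0(t)\bigr) \wedge g\bigl(q_0(t),\, t + t_0\bigr)\,\mathrm{d}t,
\qquad a \wedge b := a_1 b_2 - a_2 b_1 .
```

Simple zeros of M(t_0) signal transverse intersections of the manifolds, and hence chaotic dynamics.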
The fluid mechanics of gels used in tissue engineering
15:10 Fri 9 Apr, 2010 :: Santos Lecture Theatre :: Dr Edward Green :: University of Western Australia

Tissue engineering could be called 'the science of spare parts'. Although currently in its infancy, its long-term aim is to grow functional tissues and organs in vitro to replace those which have become defective through age, trauma or disease. Recent experiments have shown that mechanical interactions between cells and the materials in which they are grown have an important influence on tissue architecture, but in order to understand these effects, we first need to understand the mechanics of the gels themselves.

Many biological gels (e.g. collagen) used in tissue engineering have a fibrous microstructure which affects the way forces are transmitted through the material, and which in turn affects cell migration and other behaviours. I will present a simple continuum model of gel mechanics, based on treating the gel as a transversely isotropic viscous material. Two canonical problems are considered involving thin two-dimensional films: extensional flow, and squeezing flow of the fluid between two rigid plates. Neglecting inertia, gravity and surface tension, in each regime we can exploit the thin geometry to obtain a leading-order problem which is sufficiently tractable to allow the use of analytical methods. I discuss how these results could be exploited practically to determine the mechanical properties of real gels. If time permits, I will also talk about work currently in progress which explores the interaction between gel mechanics and cell behaviour.

"The Emperor's New Mind": computers, minds, physics and biology
11:10 Wed 21 Apr, 2010 :: Napier 210 :: Prof Tony Roberts :: University of Adelaide

In the mid-1990s the computer 'Deep Blue' beat Kasparov, the world chess champion. Will computers soon overtake us humans in other endeavours of intelligence? Roger Penrose's thesis is that human intelligence is far more subtle than has previously been imagined, that the quest for human-like artificial intelligence in computers, the holy grail of artificial intelligence, is hopeless. The argument ranges from icily clear mathematics of computation, through the amazing shadows of quantum physics, and thence to new conjectures in biology.
Topological chaos in two and three dimensions
15:10 Fri 18 Jun, 2010 :: Santos Lecture Theatre :: Dr Matt Finn :: School of Mathematical Sciences

Research into two-dimensional laminar fluid mixing has enjoyed a renaissance in the last decade since the realisation that the Thurston–Nielsen theory of surface homeomorphisms can assist in designing efficient "topologically chaotic" batch mixers. In this talk I will survey some tools used in topological fluid kinematics, including braid groups, train-tracks, dynamical systems and topological index formulae. I will then make some speculations about topological chaos in three dimensions.
A spatial-temporal point process model for fine resolution multisite rainfall data from Roma, Italy
14:10 Thu 19 Aug, 2010 :: Napier G04 :: A/Prof Paul Cowpertwait :: Auckland University of Technology

A point process rainfall model is further developed that has storm origins occurring in space-time according to a Poisson process. Each storm origin has a random radius so that storms occur as circular regions in two-dimensional space, where the storm radii are taken to be independent exponential random variables. Storm origins are of random type z, where z follows a continuous probability distribution. Cell origins occur in a further spatial Poisson process and have arrival times that follow a Neyman-Scott point process. Cell origins have random radii so that cells form discs in two-dimensional space. Statistical properties up to third order are derived and used to fit the model to 10 min series taken from 23 sites across the Roma region, Italy. Distributional properties of the observed annual maxima are compared to equivalent values sampled from series that are simulated using the fitted model. The results indicate that the model will be of use in urban drainage projects for the Roma region.
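The spatial skeleton of such a model can be sketched in a few lines: storm origins arriving as a Poisson process in time, placed uniformly over a region, with independent exponential radii. The storm types, the Neyman-Scott cell-level structure and the rainfall intensities of the full model are omitted, and every parameter value here is invented for illustration.

```python
import numpy as np

def simulate_storms(storm_rate=0.5, region=(0.0, 100.0, 0.0, 100.0),
                    t_max=240.0, mean_radius=10.0, seed=1):
    """Storm origins: Poisson arrivals in time (storm_rate per unit time),
    uniform positions over the rectangular region, exponential radii."""
    rng = np.random.default_rng(seed)
    x0, x1, y0, y1 = region
    n = rng.poisson(storm_rate * t_max)
    return {
        "x": rng.uniform(x0, x1, n),
        "y": rng.uniform(y0, y1, n),
        "t": rng.uniform(0.0, t_max, n),
        "radius": rng.exponential(mean_radius, n),
    }

def covers(storms, px, py, t):
    """Boolean mask of storms whose disc contains site (px, py) and whose
    origin time is at most t -- a crude stand-in for 'affecting the site'."""
    d = np.hypot(storms["x"] - px, storms["y"] - py)
    return (d <= storms["radius"]) & (storms["t"] <= t)

storms = simulate_storms()
mask = covers(storms, 50.0, 50.0, 240.0)
```

Fitting such a model, as in the talk, proceeds by matching analytical moment properties (up to third order) to the observed multisite series rather than by simulation alone.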
Queues with skill based routing under FCFS–ALIS regime
15:10 Fri 11 Feb, 2011 :: B17 Ingkarni Wardli :: Prof Gideon Weiss :: The University of Haifa, Israel

We consider a system where jobs of several types are served by servers of several types, and a bipartite graph between server types and job types describes feasible assignments. This is a common situation in manufacturing, call centers with skill based routing, matching of parent and child in adoption, matching in kidney transplants, etc. We consider the first come first served policy: jobs are assigned to the first available feasible server in order of arrival. We consider two types of policies for assigning customers to idle servers: random assignment, and assignment to the longest idle server (ALIS). We survey some results for four different situations:

  • For a loss system we find conditions for reversibility and insensitivity.
  • For a manufacturing type system, in which there is enough capacity to serve all jobs, we discuss a product form solution and waiting times.
  • For an infinite matching model, in which an infinite sequence of customers of IID types and an infinite sequence of servers of IID types are matched according to first come first served, we obtain a product-form stationary distribution, which we use to calculate matching rates.
  • For a call center model with overload and abandonments we make some plausible observations.

This talk surveys joint work with Ivo Adan, Rene Caldentey, Cor Hurkens, Ed Kaplan and Damon Wischik, as well as work by Jeremy Visschers, Rishy Talreja and Ward Whitt.

Lorentzian manifolds with special holonomy
13:10 Fri 25 Mar, 2011 :: Mawson 208 :: Mr Kordian Laerz :: Humboldt University, Berlin

A parallel lightlike vector field on a Lorentzian manifold X naturally defines a foliation of codimension 1 on X and a 1-dimensional subfoliation. In the first part we introduce Lorentzian metrics on the total space of certain circle bundles in order to construct weakly irreducible Lorentzian manifolds admitting a parallel lightlike vector field such that all leaves of the foliations are compact. Then we study which holonomy representations can be realized in this way. Finally, we consider the structure of arbitrary Lorentzian manifolds for which the leaves of the foliations are compact.
Modelling computer network topologies through optimisation
12:10 Mon 1 Aug, 2011 :: 5.57 Ingkarni Wardli :: Mr Rhys Bowden :: University of Adelaide

The core of the Internet is made up of many different computers (called routers) in many different interconnected networks, owned and operated by many different organisations. A popular and important field of study in the past has been "network topology": for instance, understanding which routers are connected to which other routers, or which networks are connected to which other networks; that is, studying and modelling the connection structure of the Internet. Previous study in this area has been plagued by unreliable or flawed experimental data and debate over appropriate models to use. The Internet Topology Zoo is a new source of network data created from the information that network operators make public. In order to better understand this body of network information we would like the ability to randomly generate network topologies resembling those in the zoo. Leveraging previous wisdom on networks produced as a result of optimisation processes, we propose a simple objective function based on possible economic constraints. By changing the relative costs in the objective function we can change the form of the resulting networks, and we compare these optimised networks to a variety of networks found in the Internet Topology Zoo.
AustMS/AMSI Mahler Lecture: Chaos, quantum mechanics and number theory
18:00 Tue 9 Aug, 2011 :: Napier 102 :: Prof Peter Sarnak :: Institute for Advanced Study, Princeton

The correspondence principle in quantum mechanics is concerned with the relation between a mechanical system and its quantization. When the mechanical system is relatively orderly ("integrable"), this relation is well understood. However, when the system is chaotic, much less is understood. The key features already appear, and are well illustrated, in the simplest systems, which we will review. For chaotic systems defined number-theoretically, much more is understood and the basic problems are connected with central questions in number theory. The Mahler lectures are a biennial activity organised by the Australian Mathematical Society with the assistance of the Australian Mathematical Sciences Institute.
Alignment of time course gene expression data sets using Hidden Markov Models
12:10 Mon 5 Sep, 2011 :: 5.57 Ingkarni Wardli :: Mr Sean Robinson :: University of Adelaide

Time course microarray experiments allow for insight into biological processes by measuring gene expression over a time period of interest. This project is concerned with time course data from a microarray experiment conducted on a particular variety of grapevine over the development of the grape berries at a number of different vineyards in South Australia. The aim of the project is to construct a methodology for combining the data from the different vineyards in order to obtain more precise estimates of the underlying behaviour of the genes over the development process. A major issue in doing so is that the rate of development of the grape berries is different at different vineyards. Hidden Markov models (HMMs) are a well established methodology for modelling time series data in a number of domains and have been previously used for gene expression analysis. Modelling the grapevine data presents a unique modelling issue, namely the alignment of the expression profiles needed to combine the data from different vineyards. In this seminar, I will describe our problem, review HMMs, present an extension to HMMs and show some preliminary results modelling the grapevine data.
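For readers unfamiliar with HMMs, the core computation is the forward algorithm, which gives the likelihood of an observation sequence under the model. Below is a minimal generic discrete-emission version (the talk's extension for aligning profiles across vineyards is not shown, and the example numbers are invented).

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the (scaled) forward algorithm.
      pi : (K,)   initial state distribution
      A  : (K, K) transition matrix, A[i, j] = P(next state j | state i)
      B  : (K, M) emission matrix,   B[k, m] = P(symbol m | state k)
      obs: sequence of symbol indices
    """
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

# Example: a two-state, two-symbol HMM with invented parameters
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
ll = forward_loglik([0, 1, 0, 0], pi, A, B)
```

Aligning expression profiles that develop at different rates then amounts, roughly, to letting the hidden states index developmental stage so that series from different vineyards share states while traversing them at different speeds.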
Noncritical holomorphic functions of finite growth on algebraic Riemann surfaces
13:10 Fri 3 Feb, 2012 :: B.20 Ingkarni Wardli :: Prof Franc Forstneric :: University of Ljubljana

Given a compact Riemann surface X and a point p in X, we construct a holomorphic function without critical points on the punctured (algebraic) Riemann surface R=X-p which is of finite order at the point p. In the case at hand this improves the 1967 theorem of Gunning and Rossi to the effect that every open Riemann surface admits a noncritical holomorphic function, but without any particular growth condition. (Joint work with Takeo Ohsawa.)
Two classes of network structures that enable efficient information transmission
15:10 Fri 7 Sep, 2012 :: B.20 Ingkarni Wardli :: A/Prof Sanming Zhou :: The University of Melbourne

What network topologies should we use in order to achieve efficient information transmission? Of course, the answer to this question depends on how we measure the efficiency of information dissemination. If we measure it by the minimum gossiping time under the store-and-forward, all-port and full-duplex model, we show that certain Cayley graphs associated with Frobenius groups are `perfect' in a sense. (A Frobenius group is a permutation group which is transitive but not regular, such that only the identity element can fix two points.) Such graphs are also optimal for all-to-all routing in the sense that the maximum load on edges achieves the minimum. In this talk we will discuss this theory of optimal network design.
The advection-diffusion-reaction equation on the surface of the sphere
12:10 Mon 24 Sep, 2012 :: B.21 Ingkarni Wardli :: Mr Kale Davies :: University of Adelaide

We aim to solve the advection-diffusion-reaction equation on the surface of a sphere. To do this we will need to use spherical harmonics, a set of solutions to Laplace's equation in spherical coordinates. Upon solving the equations, we aim to find a set of parameters that cause a localised concentration, referred to as a hotspot, to be maintained in the flow. In this talk I will discuss the techniques required to solve this problem numerically, the issues that arise when searching for hotspot solutions, and how to deal with them.
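As a small illustration of the machinery only (the PDE solver and the hotspot search are not shown), a field on the sphere can be represented as a truncated spherical-harmonic expansion and evaluated on a colatitude-longitude grid using SciPy's sph_harm, whose first angular argument is the azimuthal angle.

```python
import numpy as np
from scipy.special import sph_harm

def sh_field(coeffs, n_polar=32, n_azim=64):
    """Evaluate Re( sum_{l,m} c_{lm} Y_l^m ) on a colatitude-longitude grid.
    coeffs: dict mapping (l, m) -> coefficient.
    SciPy's convention is sph_harm(m, l, azimuthal_angle, polar_angle)."""
    polar = np.linspace(0.0, np.pi, n_polar)        # colatitude
    azim = np.linspace(0.0, 2.0 * np.pi, n_azim)    # longitude
    PH, TH = np.meshgrid(azim, polar)
    field = np.zeros_like(TH, dtype=complex)
    for (l, m), c in coeffs.items():
        field += c * sph_harm(m, l, PH, TH)
    return field.real

# The (l, m) = (0, 0) mode is constant over the sphere, 1/(2*sqrt(pi));
# adding higher modes produces the localised features a hotspot search scans for.
f = sh_field({(0, 0): 1.0, (2, 1): 0.5})
```

Spectral methods of this kind diagonalise the Laplacian on the sphere, which is what makes the diffusion term cheap to treat.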
Towards understanding fundamental interactions for nanotechnology
15:10 Fri 5 Oct, 2012 :: B.20 Ingkarni Wardli :: Dr Doreen Mollenhauer :: MacDiarmid Institute for Advanced Materials and Nanotechnology, Wellington

Multiple simultaneous interactions show unique collective properties that are qualitatively different from properties displayed by their monovalent constituents. Multivalent interactions play an important role for the self-organization of matter, recognition processes and signal transduction. A broad understanding of these interactions is therefore crucial in order to answer central questions and make new developments in the field of biotechnology and material science. In the framework of a joint experimental and theoretical project we study the electronic effects in monovalent and multivalent interactions by doing quantum chemical calculations. The particular interest of our investigations is in organic molecules interacting with gold nanoparticles or graphene. The main purpose is to analyze the nature of multivalent bonding in comparison to monovalent interaction.
Filtering Theory in Modelling the Electricity Market
12:10 Mon 6 May, 2013 :: B.19 Ingkarni Wardli :: Ahmed Hamada :: University of Adelaide

In mathematical finance, as in many other fields where applied mathematics is a powerful tool, we assume a model is good enough when it captures the different sources of randomness affecting the quantities of interest, in this case electricity prices. The power market is very different from other markets in terms of the sources of randomness that can be observed in the behaviour and evolution of prices. We begin by suggesting a new model for simulating electricity prices, constructed by adding a periodicity term, a jump term and a positive mean-reverting term. The latter term is driven by a non-observable Markov process, so in order to price financial products we must use filtering theory to deal with the non-observable process. These techniques are attracting much interest from practitioners and researchers in the field of financial mathematics.
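A toy version of such a price path (periodic component + mean-reverting component + spikes) can be simulated as below. All parameter values are invented for illustration, and the talk's model additionally drives the mean-reverting term by an unobservable Markov process, which is precisely why filtering is needed.

```python
import numpy as np

def simulate_prices(n_days=365, seed=2):
    """Toy daily electricity-price path: yearly seasonality, a mean-reverting
    (OU-type) component, and occasional exponential spikes."""
    rng = np.random.default_rng(seed)
    kappa, sigma = 0.2, 2.0                 # mean-reversion speed, volatility
    jump_prob, jump_scale = 0.03, 15.0      # spike frequency and average size
    t = np.arange(n_days)
    seasonal = 40.0 + 5.0 * np.sin(2.0 * np.pi * t / 365.0)
    x = np.zeros(n_days)                    # mean-reverting component
    for i in range(1, n_days):
        x[i] = (1.0 - kappa) * x[i - 1] + sigma * rng.normal()
    spikes = rng.exponential(jump_scale, n_days) * (rng.random(n_days) < jump_prob)
    return seasonal + x + spikes

prices = simulate_prices()
```

In the full model, pricing requires estimating the hidden Markov state from the observed prices, which is where the filtering machinery of the talk enters.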
Four hats, three prisoners, two colours and a jailer
12:35 Mon 5 Aug, 2013 :: B.19 Ingkarni Wardli :: Kale Davies :: University of Adelaide

It was a dark and stormy night. Theodore Jailer sat alone in his office scrawling notes on a piece of paper, muttering to himself in frustration. Suddenly he stopped; his eyes widened in excitement and a smile spread across his face. No, not a smile, but a grimace, for you see, evil was afoot! For Jailer, who was the jailer at a local prison, had devised a nefarious scheme to execute all of the prisoners once and for all. Can his evil plans be thwarted in time? Stay tuned to find out!
Shannon entropy as a diagnostic tool for PDEs in conservation form
15:10 Fri 16 Aug, 2013 :: B.18 Ingkarni Wardli :: Prof Philip Broadbridge :: La Trobe University

After normalization, an evolving real non-negative function may be viewed as a probability density. From this we may derive the corresponding evolution law for Shannon entropy. Parabolic equations, hyperbolic equations and fourth-order diffusion equations evolve information in quite different ways. Entropy and irreversibility can be introduced in a self-consistent manner, and at an elementary level, by reference to some simple evolution equations such as the linear heat equation. It is easily seen that the second law of thermodynamics is equivalent to loss of Shannon information when temperature obeys a general nonlinear second-order diffusion equation. With fourth-order diffusion terms, new problems arise. We know from applications such as thin-film flow and surface diffusion that fourth-order diffusion terms may generate ripples and need not satisfy the second law. Despite this, we can identify the class of fourth-order quasilinear diffusion equations that increase the Shannon entropy.
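As a numerical illustration of the heat-equation case (my own sketch, not taken from the talk), one can check that the Shannon entropy of a normalised solution of the linear heat equation increases, using a simple explicit finite-difference scheme with reflecting boundaries:

```python
import math

def step_heat(u, r):
    """One explicit finite-difference step of u_t = u_xx (mirrored ends)."""
    n = len(u)
    new = u[:]
    for i in range(n):
        left = u[i - 1] if i > 0 else u[1]
        right = u[i + 1] if i < n - 1 else u[n - 2]
        new[i] = u[i] + r * (left - 2 * u[i] + right)
    return new

def shannon_entropy(u, dx):
    """Shannon entropy of the density obtained by normalising u."""
    total = sum(u) * dx
    p = [v / total for v in u]
    return -sum(pi * math.log(pi) * dx for pi in p if pi > 0)

# Sharply peaked initial profile on [0, 1].
n, dx = 101, 0.01
u = [math.exp(-((i * dx - 0.5) / 0.05) ** 2) for i in range(n)]
r = 0.25  # stable, since the scheme requires r <= 1/2
entropies = [shannon_entropy(u, dx)]
for _ in range(200):
    u = step_heat(u, r)
    entropies.append(shannon_entropy(u, dx))
```

As the peak diffuses toward a uniform profile, the computed entropy rises toward its maximum, consistent with the second-law statement in the abstract.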
Symmetry gaps for geometric structures
15:10 Fri 20 Sep, 2013 :: B.18 Ingkarni Wardli :: Dr Dennis The :: Australian National University

Klein's Erlangen program classified geometries based on their (transitive) groups of symmetries, e.g. Euclidean geometry is the quotient of the rigid motion group by the subgroup of rotations. While this perspective is homogeneous, Riemann's generalization of Euclidean geometry is in general very "lumpy" - i.e. there exist Riemannian manifolds that have no symmetries at all. A common generalization where a group still plays a dominant role is Cartan geometry, which first arose in Cartan's solution to the equivalence problem for geometric structures, and which articulates what a "curved version" of a flat (homogeneous) model means. Parabolic geometries are Cartan geometries modelled on (generalized) flag varieties (e.g. projective space, isotropic Grassmannians) which are well-known objects from the representation theory of semisimple Lie groups. These curved versions encompass a zoo of interesting geometries, including conformal, projective, CR, systems of 2nd order ODE, etc. This interaction between differential geometry and representation theory has proved extremely fruitful in recent years. My talk will be an example-based tour of various types of parabolic geometries, which I'll use to outline some of the main aspects of the theory (suppressing technical details). The main thread throughout the talk will be the symmetry gap problem: For a given type of Cartan geometry, the maximal symmetry dimension is realized by the flat model, but what is the next possible ("submaximal") symmetry dimension? I'll sketch a recent solution (in joint work with Boris Kruglikov) for a wide class of parabolic geometries which gives a combinatorial recipe for reading the submaximal symmetry dimension from a Dynkin diagram.
All at sea with spectral analysis
11:10 Tue 19 Nov, 2013 :: Ingkarni Wardli Level 5 Room 5.56 :: A/Prof Andrew Metcalfe :: The University of Adelaide

The steady-state response of a single-degree-of-freedom damped linear system to a sinusoidal input is a sinusoidal function at the same frequency, but generally with a different amplitude and a phase shift. The analogous result for a random stationary input can be described in terms of input and response spectra and a transfer-function description of the linear system. The practical use of this result is that the parameters of a linear system can be estimated from the input and response spectra, and the response spectrum can be predicted if the transfer function and input spectrum are known. I shall demonstrate these results with data from a small ship in the North Sea. The results from the sea trial raise the issue of non-linearity, and second-order amplitude response functions are obtained using auto-regressive estimators. The possibility of using wavelets rather than spectra is considered in the context of single-degree-of-freedom linear systems. Everybody is welcome to attend. Please note the change of venue: we will be in room 5.56.
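The steady-state result in the first sentence is easy to verify numerically. The sketch below uses a discrete-time first-order filter as an illustrative stand-in (not the ship system from the sea trial): driven by a sinusoid, the output settles to a sinusoid at the same frequency with amplitude scaled by |H(omega)|.

```python
import cmath
import math

# Discrete-time first-order linear system: y[n] = a*y[n-1] + x[n].
a = 0.8
omega = 2 * math.pi / 32                     # input frequency (rad/sample)
H = 1 / (1 - a * cmath.exp(-1j * omega))     # analytic transfer function

# Drive the system with a unit-amplitude sinusoid and let transients decay.
x = [math.cos(omega * n) for n in range(4096)]
y, prev = [], 0.0
for xn in x:
    prev = a * prev + xn
    y.append(prev)
tail = y[2048:]                              # steady-state portion only
measured_gain = (max(tail) - min(tail)) / 2  # should approximate |H(omega)|
```

The same relation underlies spectral estimation: for a stationary random input, the response spectrum equals |H|^2 times the input spectrum, so |H| can be estimated from the two spectra.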
The Mandelbrot Set
12:10 Mon 5 May, 2014 :: B.19 Ingkarni Wardli :: David Bowman :: University of Adelaide

The Mandelbrot set is an icon of modern mathematics, an image which fires the popular imagination when accompanied by the words 'chaos' and 'fractal'. However, few could give even a vague definition of this mysterious set and fewer still know the mathematical meaning behind it. In this talk we will be looking at the role that the Mandelbrot set plays in complex dynamics, the study of iterated complex valued functions. We shall discuss attracting and repelling cycles and how they are related to the different components of the Mandelbrot set.
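As a concrete version of the definition (the set of parameters c for which the critical orbit of z -> z^2 + c stays bounded), here is a minimal escape-time test:

```python
def escape_time(c, max_iter=100):
    """Iterate z -> z^2 + c from z = 0. Return the step at which |z|
    first exceeds 2 (such orbits escape to infinity), or None if the
    orbit stays bounded for max_iter steps, in which case c is taken
    to lie in the Mandelbrot set, up to this resolution."""
    z = 0
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return None
```

For example, c = -1 gives the bounded period-2 cycle 0 -> -1 -> 0, an attracting cycle of the kind discussed in the talk, while c = 1 escapes after a few steps.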
Group meeting
15:10 Fri 6 Jun, 2014 :: 5.58 Ingkarni Wardli :: Meng Cao and Trent Mattner :: University of Adelaide

Meng Cao:: Multiscale modelling couples patches of nonlinear wave-like simulations :: Abstract: The multiscale gap-tooth scheme is built from given microscale simulations of complicated physical processes to empower macroscale simulations. By coupling small patches of simulations over unsimulated physical gaps, large savings in computational time are possible. So far the gap-tooth scheme has been developed for dissipative systems, but wave systems are also of great interest. This article develops the gap-tooth scheme for nonlinear microscale simulations of wave-like systems. Classic macroscale interpolation provides a generic coupling between patches that achieves arbitrarily high-order consistency between the multiscale scheme and the underlying microscale dynamics. Eigen-analysis indicates that the resultant gap-tooth scheme empowers feasible computation of large-scale simulations of wave-like dynamics with complicated underlying physics. As a pilot study, we implement numerical simulations of dam-breaking waves by the gap-tooth scheme. Comparison between a gap-tooth simulation, a microscale simulation over the whole domain, and some published experimental data on dam breaking demonstrates that the gap-tooth scheme feasibly computes large-scale wave-like dynamics with computational savings. Trent Mattner :: Coupled atmosphere-fire simulations of the Canberra 2003 bushfires using WRF-Sfire :: Abstract: The Canberra fires of January 18, 2003 are notorious for the extreme fire behaviour and fire-atmosphere-topography interactions that occurred, including lee-slope fire channelling, pyrocumulonimbus development and tornado formation. In this talk, I will discuss coupled fire-weather simulations of the Canberra fires using WRF-Sfire.
In these simulations, a fire-behaviour model is used to dynamically predict the evolution of the fire front according to local atmospheric and topographic conditions, as well as the associated heat and moisture fluxes to the atmosphere. It is found that the predicted fire front and heat flux are not too bad, bearing in mind the complexity of the problem and the severe modelling assumptions made. However, the predicted moisture flux is too low, which has some impact on atmospheric dynamics.
Hydrodynamics and rheology of self-propelled colloids
15:10 Fri 8 Aug, 2014 :: B17 Ingkarni Wardli :: Dr Sarthok Sircar :: University of Adelaide

The sub-cellular world has many components in common with soft condensed matter systems (polymers, colloids and liquid crystals). But it has novel properties, not present in traditional complex fluids, arising from a rich spectrum of non-equilibrium behavior: flocking, chemotaxis and bioconvection. The talk is divided into two parts. In the first half, we will (get an idea of how to) derive a hydrodynamic model for self-propelled particles of an arbitrary shape from first principles, in a sufficiently dilute suspension limit, moving in a 3-dimensional space inside a viscous solvent. The model is then restricted to particles with ellipsoidal geometry to quantify the interplay of the long-range excluded volume and the short-range self-propulsion effects. The expressions for the constitutive stresses, relating the kinetic theory with the momentum transport equations, are derived using a combination of the virtual work principle (for extra elastic stresses) and symmetry arguments (for active stresses). The second half of the talk will highlight my current numerical work. In particular, we will exploit a specific class of spectral basis functions together with RK4 time-stepping to determine the dynamical phases/structures as well as phase transitions of these ellipsoidal clusters. We will also discuss how to define the order (or orientation) of these clusters and how to understand the other rheological quantities.
The Dirichlet problem for the prescribed Ricci curvature equation
12:10 Fri 15 Aug, 2014 :: Ingkarni Wardli B20 :: Artem Pulemotov :: University of Queensland

We will discuss the following question: is it possible to find a Riemannian metric whose Ricci curvature is equal to a given tensor on a manifold M? To answer this question, one must analyze a weakly elliptic second-order geometric PDE. In the first part of the talk, we will review the history of the subject and state several classical theorems. After that, our focus will be on new results concerning the case where M has nonempty boundary.
Neural Development of the Visual System: a laminar approach
15:10 Fri 29 Aug, 2014 :: N132 Engineering North :: Dr Andrew Oster :: Eastern Washington University

In this talk, we will introduce the architecture of the visual system in higher order primates and cats. Through activity-dependent plasticity mechanisms, the left and right eye streams segregate in the cortex in a stripe-like manner, resulting in a pattern called an ocular dominance map. We introduce a mathematical model to study how such a neural wiring pattern emerges. We go on to consider the joint development of the ocular dominance map with another feature of the visual system, the cytochrome oxidase blobs, which appear in the center of the ocular dominance stripes. Since cortex is in fact comprised of layers, we introduce a simple laminar model and perform a stability analysis of the wiring pattern. This intricate biological structure (ocular dominance stripes with "blobs" periodically distributed in their centers) can be understood as occurring due to two Turing instabilities combined with the leading-order dynamics of the system.
Neural Development of the Visual System: a laminar approach
15:10 Fri 29 Aug, 2014 :: This talk will now be given as a School Colloquium :: Dr Andrew Oster :: Eastern Washington University

In this talk, we will introduce the architecture of the visual system in higher order primates and cats. Through activity-dependent plasticity mechanisms, the left and right eye streams segregate in the cortex in a stripe-like manner, resulting in a pattern called an ocular dominance map. We introduce a mathematical model to study how such a neural wiring pattern emerges. We go on to consider the joint development of the ocular dominance map with another feature of the visual system, the cytochrome oxidase blobs, which appear in the center of the ocular dominance stripes. Since cortex is in fact comprised of layers, we introduce a simple laminar model and perform a stability analysis of the wiring pattern. This intricate biological structure (ocular dominance stripes with 'blobs' periodically distributed in their centers) can be understood as occurring due to two Turing instabilities combined with the leading-order dynamics of the system.
Modelling biological gel mechanics
12:10 Mon 8 Sep, 2014 :: B.19 Ingkarni Wardli :: James Reoch :: University of Adelaide

The behaviour of gels such as collagen is the result of complex interactions between mechanical and chemical forces. In this talk, I will outline the modelling approaches we are looking at in order to incorporate the influence of cell behaviour alongside chemical potentials, and the various circumstances which lead to gel swelling and contraction.
Topology Tomography with Spatial Dependencies
15:00 Tue 25 Nov, 2014 :: Engineering North N132 :: Darryl Veitch :: The University of Melbourne

There has been quite a lot of tomography inference work on measurement networks with a tree topology. Here observations are made, at the leaves of the tree, of `probes' sent down from the root and copied at each branch point. Inference can be performed based on loss or delay information carried by probes, and used in order to recover loss parameters, delay parameters, or the topology, of the tree. In all of these a strong assumption of spatial independence between links in the tree has been made in prior work. I will describe recent work on topology inference, based on loss measurement, which breaks that assumption. In particular I will introduce a new model class for loss with non-trivial spatial dependence, the `Jump Independent Models', which are well motivated, and prove that within this class the topology is identifiable.
Boundary behaviour of Hitchin and hypo flows with left-invariant initial data
12:10 Fri 27 Feb, 2015 :: Ingkarni Wardli B20 :: Vicente Cortes :: University of Hamburg

Hitchin and hypo flows constitute a system of first-order PDEs for the construction of Ricci-flat Riemannian metrics of special holonomy in dimensions 6, 7 and 8. Assuming that the initial geometric structure is left-invariant, we study whether the resulting Ricci-flat manifolds can be extended in a natural way to complete Ricci-flat manifolds. This talk is based on joint work with Florin Belgun, Marco Freibert and Oliver Goertsches, see arXiv:1405.1866 (math.DG).
Multivariate regression in quantitative finance: sparsity, structure, and robustness
15:10 Fri 1 May, 2015 :: Engineering North N132 :: A/Prof Mark Coates :: McGill University

Many quantitative hedge funds around the world strive to predict future equity and futures returns based on many sources of information, including historical returns and economic data. This leads to a multivariate regression problem. Compared to many regression problems, the signal-to-noise ratio is extremely low, and profits can be realized if even a small fraction of the future returns can be accurately predicted. The returns generally have heavy-tailed distributions, further complicating the regression procedure.

In this talk, I will describe how we can impose structure into the regression problem in order to make detection and estimation of the very weak signals feasible. Some of this structure consists of an assumption of sparsity; some of it involves identification of common factors to reduce the dimension of the problem. I will also describe how we can formulate alternative regression problems that lead to more robust solutions that better match the performance metrics of interest in the finance setting.
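None of the fund's actual methodology is given in the abstract; as a generic illustration of how a sparsity assumption helps detect very weak signals, the sketch below recovers the support of a sparse signal from noisy observations by soft-thresholding, the proximal step behind L1-penalised regression. All dimensions and noise levels are made up.

```python
import random

def soft_threshold(v, lam):
    """Proximal operator of the L1 penalty: shrink v toward zero by lam."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

random.seed(7)
n, k = 200, 5
x = [0.0] * n
for i in random.sample(range(n), k):   # a few genuinely active coefficients
    x[i] = 3.0
y = [xi + random.gauss(0.0, 0.5) for xi in x]   # noisy observations
lam = 1.5  # threshold above the noise level, below the signal level
x_hat = [soft_threshold(v, lam) for v in y]
support = [i for i, v in enumerate(x_hat) if v != 0.0]
```

Without the sparsity assumption every coordinate is kept and the weak signal drowns in noise; thresholding discards almost all pure-noise coordinates while retaining the active ones.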

The twistor equation on Lorentzian Spin^c manifolds
12:10 Fri 15 May, 2015 :: Napier 144 :: Andree Lischewski :: University of Adelaide

In this talk I consider a conformally covariant spinor field equation, called the twistor equation, which can be formulated on any Lorentzian Spin^c manifold. Its solutions have become of importance in the study of supersymmetric field theories in recent years and were named "charged conformal Killing spinors". After a short review of conformal Spin^c geometry in Lorentzian signature, I will briefly discuss the emergence of charged conformal Killing spinors in supergravity. I will then focus on special geometric structures related to the twistor equation and use charged conformal Killing spinors in order to establish a link between conformal and CR geometry.
Can mathematics help save energy in computing?
15:10 Fri 22 May, 2015 :: Engineering North N132 :: Prof Markus Hegland :: ANU


Recent development of computational hardware is characterised by two trends: 1. high levels of duplication of computational capabilities in multicore, parallel and GPU processing; and 2. substantially faster growth in the speed of computation than in the speed of communication.

A consequence of these two trends is that energy costs of modern computing devices from mobile phones to supercomputers are increasingly dominated by communication costs. In order to save energy one would thus need to reduce the amount of data movement within the computer. This can be achieved by recomputing results instead of communicating them. The resulting increase in computational redundancy may also be used to make the computations more robust against hardware faults. Paradoxically, by doing more (computations) we do use less (energy).

This talk will first discuss for a simple example how a mathematical understanding can be applied to improve computational results using extrapolation. Then the problem of energy consumption in computational hardware will be considered. Finally some recent work will be discussed which shows how redundant computing is used to mitigate computational faults and thus to save energy.
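A simple instance of the extrapolation idea mentioned above (my example, not necessarily the one from the talk): combining two trapezoidal-rule estimates cancels the leading O(h^2) error term, so a little mathematical understanding buys accuracy that would otherwise require much more computation.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

f, a, b = math.sin, 0.0, math.pi     # exact integral is 2
coarse = trapezoid(f, a, b, 8)
fine = trapezoid(f, a, b, 16)
# The trapezoidal error is O(h^2), so one Richardson step with halved h
# cancels the leading term:
richardson = (4 * fine - coarse) / 3
```

The extrapolated value is far closer to the exact answer than either raw estimate, at essentially no extra cost.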

Big things are weird
12:10 Mon 25 May, 2015 :: Napier LG29 :: Luke Keating-Hughes :: University of Adelaide

The pyramids of Giza, the depths of the Mariana trench, the massive Einstein Cross Quasar; all of these things are big and weird. Big weird things aren't just apparent in the physical world though, they appear in mathematics too! In this talk I will try to motivate a mathematical big thing and then show that it is weird. In particular, we will introduce the necessary topology and homotopy theory in order to show that although all finite dimensional spheres are (almost canonically) non-contractible spaces - an infinite dimensional sphere IS contractible! This result's significance will then be explained in the context of Kuiper's Theorem if time permits.
Queues and cooperative games
15:00 Fri 18 Sep, 2015 :: Ingkarni Wardli B21 :: Moshe Haviv :: Department of Statistics and the Federmann Center for the Study of Rationality, The Hebrew Universit

The area of cooperative game theory deals with models in which a number of individuals, called players, can form coalitions so as to improve the utility of their members. In many cases, the formation of the grand coalition is a natural result of some negotiation or bargaining procedure. The main question then is how the players should split the gains due to their cooperation among themselves. Various solutions have been suggested, among them the Shapley value, the nucleolus and the core.

Servers in a queueing system can also join forces. For example, they can exchange service capacity among themselves or serve customers who originally seek service at their peers. The overall performance improves and the question is how they should split the gains, or, equivalently, how much each one of them needs to pay or be paid in order to cooperate with the others. Our major focus is on the core of the resulting cooperative game and on showing that in many queueing games the core is not empty.

Finally, customers who are served by the same server can also be looked at as players who form a grand coalition, now inflicting damage on each other in the form of additional waiting time. We show how cooperative game theory, specifically the Aumann-Shapley prices, leads to a way in which this damage can be attributed to individual customers or groups of customers.
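The Shapley value mentioned above can be computed directly for tiny games by averaging each player's marginal contribution over all orders in which the grand coalition can form; the three-server worth function below is entirely hypothetical.

```python
from itertools import permutations

def shapley(players, v):
    """Shapley value: average marginal contribution of each player
    over all arrival orders of the grand coalition."""
    values = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            values[p] += v(with_p) - v(coalition)
            coalition = with_p
    return {p: val / len(orders) for p, val in values.items()}

# A toy gain-sharing game (hypothetical numbers): three servers pooling
# capacity; v(S) is the gain coalition S can secure on its own.
def v(S):
    gains = {frozenset(): 0, frozenset('A'): 1, frozenset('B'): 1,
             frozenset('C'): 2, frozenset('AB'): 4, frozenset('AC'): 5,
             frozenset('BC'): 5, frozenset('ABC'): 9}
    return gains[frozenset(S)]

phi = shapley('ABC', v)
```

The resulting split is efficient (the shares sum to v of the grand coalition) and treats the two symmetric servers A and B identically, two of the axioms that characterise the Shapley value.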
Analytic complexity of bivariate holomorphic functions and cluster trees
12:10 Fri 2 Oct, 2015 :: Ingkarni Wardli B17 :: Timur Sadykov :: Plekhanov University, Moscow

The Kolmogorov-Arnold theorem yields a representation of a multivariate continuous function in terms of a composition of functions which depend on at most two variables. In the analytic case, understanding the complexity of such a representation naturally leads to the notion of the analytic complexity of (a germ of) a bivariate multi-valued analytic function. According to Beloshapka's local definition, the order of complexity of any univariate function is equal to zero, while the n-th complexity class is defined recursively to consist of functions of the form a(b(x,y)+c(x,y)), where a is a univariate analytic function and b and c belong to the (n-1)-th complexity class. Such a representation is meant to be valid for suitable germs of multi-valued holomorphic functions. A randomly chosen bivariate analytic function will most likely have infinite analytic complexity. However, for a number of important families of special functions of mathematical physics their complexity is finite and can be computed or estimated. Using this, we introduce the notion of the analytic complexity of a binary tree, in particular a cluster tree, and investigate its properties.
Modelling Directionality in Stationary Geophysical Time Series
12:10 Mon 12 Oct, 2015 :: Benham Labs G10 :: Mohd Mahayaudin Mansor :: University of Adelaide

Many time series show directionality inasmuch as plots against time and against time-to-go are qualitatively different, and there is a range of statistical tests to quantify this effect. There are two strategies for allowing for directionality in time series models. Linear models are reversible if and only if the noise terms are Gaussian, so one strategy is to use linear models with non-Gaussian noise. The alternative is to use non-linear models. We investigate how non-Gaussian noise affects directionality in a first-order autoregressive process AR(1) and compare this with a threshold autoregressive model with two thresholds. The findings are used to suggest possible improvements to an AR(9) model, identified by an AIC criterion, for the average yearly sunspot numbers from 1700 to 1900. The improvement is defined in terms of one-step-ahead forecast errors from 1901 to 2014.
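A minimal numerical illustration of the reversibility point (a sketch of my own, not the exact models or tests from the talk): an AR(1) driven by centred exponential (skewed) innovations versus Gaussian ones, with directionality measured by the mean cubed first difference. Reversing time negates this statistic exactly, so it vanishes in expectation for a time-reversible series.

```python
import random

def simulate_ar1(n, a, innovations):
    """AR(1): X_t = a*X_{t-1} + e_t, started from zero."""
    x, xs = 0.0, []
    for e in innovations[:n]:
        x = a * x + e
        xs.append(x)
    return xs

def directionality(xs):
    """Mean cubed first difference: zero in expectation for a
    reversible series, typically nonzero for a directional one."""
    d = [xs[t + 1] - xs[t] for t in range(len(xs) - 1)]
    return sum(v ** 3 for v in d) / len(d)

random.seed(0)
n = 5000
# Centred exponential innovations: skewed, hence non-Gaussian.
skewed = [random.expovariate(1.0) - 1.0 for _ in range(n)]
gaussian = [random.gauss(0.0, 1.0) for _ in range(n)]
stat_skewed = directionality(simulate_ar1(n, 0.7, skewed))
stat_gauss = directionality(simulate_ar1(n, 0.7, gaussian))
```

Sample paths driven by the skewed innovations show sharp rises and slow decays, the asymmetry between plots against time and against time-to-go that the statistic is designed to detect.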
A Semi-Markovian Modeling of Limit Order Markets
13:00 Fri 11 Dec, 2015 :: Ingkarni Wardli 5.57 :: Anatoliy Swishchuk :: University of Calgary

R. Cont and A. de Larrard (SIAM J. Financial Mathematics, 2013) introduced a tractable stochastic model for the dynamics of a limit order book, computing various quantities of interest such as the probability of a price increase or the diffusion limit of the price process. As suggested by empirical observations, we extend their framework to 1) arbitrary distributions for book events inter-arrival times (possibly non-exponential) and 2) both the nature of a new book event and its corresponding inter-arrival time depend on the nature of the previous book event. We do so by resorting to Markov renewal processes to model the dynamics of the bid and ask queues. We keep analytical tractability via explicit expressions for the Laplace transforms of various quantities of interest. Our approach is justified and illustrated by calibrating the model to the five stocks Amazon, Apple, Google, Intel and Microsoft on June 21st 2012. As in Cont and Larrard, the bid-ask spread remains constant equal to one tick, only the bid and ask queues are modelled (they are independent from each other and get reinitialized after a price change), and all orders have the same size. (This talk is based on our joint paper with Nelson Vadori (Morgan Stanley)).
Chaos in dimensions 2 and 3
15:10 Fri 18 Mar, 2016 :: Engineering South S112 :: Dr Andy Hammerlindl :: Monash University

I will talk about known models of chaotic dynamical systems in dimensions two and three, and results which classify the types of chaotic dynamics that are robust under perturbation. I will also talk about my own work towards understanding chaotic dynamics for discrete-time systems in dimension three. This is joint work with C. Bonatti, A. Gogolev, and R. Potrie.
Probabilistic Meshless Methods for Bayesian Inverse Problems
15:10 Fri 5 Aug, 2016 :: Engineering South S112 :: Dr Chris Oates :: University of Technology Sydney

This talk deals with statistical inverse problems that involve partial differential equations (PDEs) with unknown parameters. Our goal is to account, in a rigorous way, for the impact of discretisation error that is introduced at each evaluation of the likelihood due to numerical solution of the PDE. In the context of meshless methods, the proposed, model-based approach to discretisation error encourages statistical inferences to be more conservative in the presence of significant solver error. In addition, (i) a principled learning-theoretic approach to minimise the impact of solver error is developed, and (ii) the challenge of non-linear PDEs is considered. The method is applied to parameter inference problems in which non-negligible solver error must be accounted for in order to draw valid statistical conclusions.
A principled experimental design approach to big data analysis
15:10 Fri 23 Sep, 2016 :: Napier G03 :: Prof Kerrie Mengersen :: Queensland University of Technology

Big Datasets are endemic, but they are often notoriously difficult to analyse because of their size, complexity, history and quality. The purpose of this paper is to open a discourse on the use of modern experimental design methods to analyse Big Data in order to answer particular questions of interest. By appeal to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has wide generality and advantageous inferential and computational properties. In particular, the principled experimental design approach is shown to provide a flexible framework for analysis that, for certain classes of objectives and utility functions, delivers equivalent answers compared with analyses of the full dataset. It can also provide a formalised method for iterative parameter estimation, model checking, identification of data gaps and evaluation of data quality. Finally it has the potential to add value to other Big Data sampling algorithms, in particular divide-and-conquer strategies, by determining efficient sub-samples.
Plumbing regular closed polygonal curves
12:10 Mon 22 May, 2017 :: Ingkarni Wardli Conference Room 715 :: Dr Barry Cox :: School of Mathematical Sciences

In 1980 the following puzzle appeared in Mathematics Magazine: A certain mathematician, in order to make ends meet, moonlights as an apprentice plumber. One night, as the mathematician contemplated a pile of straight pipes of equal lengths and right-angled elbows, the following question occurred to this mathematician: ``For which positive integers n could I form a closed polygonal curve using n such straight pipes and n elbows?'' It turns out that it is possible for any even number n greater than or equal to 4 and any odd number n greater than or equal to 7. However the case n=7 is particularly interesting because it can be done one of two ways and the problem is related to that of determining all the possible conformations of the molecule cyclo-heptane, although the angles in cyclo-heptane are not right angles. This raises the questions: ``Do the two solutions to the maths puzzle with right-angles correspond to the two principal conformations of cyclo-heptane?'', and ``How many solutions/conformations exist for other elbow angles?'' These and other issues will be discussed.
Conway's Rational Tangle
12:10 Tue 15 Aug, 2017 :: Ingkarni Wardli 5.57 :: Dr Hang Wang :: School of Mathematical Sciences

Much research in mathematics essentially concerns classification problems. In this context, invariants are created in order to associate algebraic quantities, such as numbers and groups, to elements of classes of geometric objects of interest, such as surfaces. A key property of an invariant is that it does not change under ``allowable moves'', which can be specified in various geometric contexts. We demonstrate these lines of ideas with rational tangles, a notion from knot theory. A tangle is analogous to a link except that it has free ends. Conway's rational tangles are the simplest tangles that can be ``unwound'' under a finite sequence of two simple moves, and they arise as building blocks for knots. A numerical invariant will be introduced for Conway's rational tangles, and it provides the only known example of a complete invariant in knot theory.
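The numerical invariant in question is Conway's tangle fraction: starting from the 0 tangle, a twist adds 1 and a rotation sends f to -1/f, and by Conway's theorem two rational tangles are equivalent exactly when their fractions agree. A minimal sketch with exact arithmetic (the move encoding is one common convention; the talk may use another):

```python
from fractions import Fraction

def tangle_fraction(word):
    """Fraction of the rational tangle built from the 0 tangle by a word
    in the two moves: 'T' (twist, f -> f + 1) and 'R' (rotate, f -> -1/f).
    Rotation of the 0 tangle would give the infinity tangle, which this
    sketch deliberately leaves out."""
    f = Fraction(0)
    for move in word:
        if move == 'T':
            f += 1
        elif move == 'R':
            if f == 0:
                raise ValueError("rotation of the 0 tangle gives the infinity tangle")
            f = -1 / f
    return f
```

For example, three twists give the integer tangle 3, while the word TTRTT produces the fraction 3/2; running the moves in reverse "unwinds" the tangle back to 0.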
Topology as a tool in algebra
15:10 Fri 8 Sep, 2017 :: Ingkarni Wardli B17 :: Dr Zsuzsanna Dancso :: University of Sydney

Topologists often use algebra in order to understand the shape of a space: invariants such as homology and cohomology are basic, and very successful, examples of this principle. Although topology is used as a tool in algebra less often, I will describe a recurring pattern on the border of knot theory and quantum algebra where this is possible. We will explore how the tangled topology of "flying circles in R^3" is deeply related to a famous problem in Lie theory: the Kashiwara-Vergne (KV) problem (first solved in 2006 by Alekseev-Meinrenken). I will explain how this relationship illuminates the intricate algebra of the KV problem.
On the fundamental of Rayleigh-Taylor instability and interfacial mixing
15:10 Fri 15 Sep, 2017 :: Ingkarni Wardli B17 :: Prof Snezhana Abarzhi :: University of Western Australia

Rayleigh-Taylor instability (RTI) develops when fluids of different densities are accelerated against their density gradient. Extensive interfacial mixing of the fluids ensues with time. Rayleigh-Taylor (RT) mixing controls a broad variety of processes in fluids, plasmas and materials, in high and low energy density regimes, at astrophysical and atomistic scales. Examples include formation of hot spot in inertial confinement, supernova explosion, stellar and planetary convection, flows in atmosphere and ocean, reactive and supercritical fluids, material transformation under impact and light-material interaction. In some of these cases (e.g. inertial confinement fusion) RT mixing should be tightly mitigated; in some others (e.g. turbulent combustion) it should be strongly enhanced. Understanding the fundamentals of RTI is crucial for achieving a better control of non-equilibrium processes in nature and technology. Traditionally, it was presumed that RTI leads to uncontrolled growth of small-scale imperfections, single-scale nonlinear dynamics, and extensive mixing that is similar to canonical turbulence. The recent success of the theory and experiments in fluids and plasmas suggests an alternative scenario of RTI evolution. It finds that the interface is necessary for RT mixing to accelerate, the acceleration effects are strong enough to suppress the development of turbulence, and the RT dynamics is multi-scale and has significant degree of order. This talk presents a physics-based consideration of fundamentals of RTI and RT mixing, and summarizes what is certain and what is not so certain in our knowledge of RTI. The focus question - How to influence the regularization process in RT mixing? We also discuss new opportunities for improvements of predictive modeling capabilities, physical description, and control of RT mixing in fluids, plasmas and materials.
Models, machine learning, and robotics: understanding biological networks
15:10 Fri 16 Mar, 2018 :: Horace Lamb 1022 :: Prof Steve Oliver :: University of Cambridge

The availability of complete genome sequences has enabled the construction of computer models of metabolic networks that may be used to predict the impact of genetic mutations on growth and survival. Both logical and constraint-based models of the metabolic network of the model eukaryote, the ale yeast Saccharomyces cerevisiae, have been available for some time and are continually being improved by the research community. While such models are very successful at predicting the impact of deleting single genes, the prediction of the impact of higher order genetic interactions is a greater challenge. Initial studies of limited gene sets provided encouraging results. However, the availability of comprehensive experimental data for the interactions between genes involved in metabolism demonstrated that, while the models were able to predict the general properties of the genetic interaction network, their ability to predict interactions between specific pairs of metabolic genes was poor. I will examine the reasons for this poor performance and demonstrate ways of improving the accuracy of the models by exploiting the techniques of machine learning and robotics. The utility of these metabolic models rests on the firm foundations of genome sequencing data. However, there are two major problems with these kinds of network models - there is no dynamics, and they do not deal with the uncertain and incomplete nature of much biological data. To deal with these problems, we have developed the Flexible Nets (FNs) modelling formalism. FNs were inspired by Petri Nets and can deal with missing or uncertain data, incorporate both dynamics and regulation, and also have the potential for model predictive control of biotechnological processes.
Chaos in higher-dimensional complex dynamics
13:10 Fri 20 Apr, 2018 :: Barr Smith South Polygon Lecture theatre :: Finnur Larusson :: University of Adelaide

I will report on new joint work with Leandro Arosio (University of Rome, Tor Vergata). Complex manifolds can be thought of as laid out across a spectrum characterised by rigidity at one end and flexibility at the other. On the rigid side, Kobayashi-hyperbolic manifolds have at most a finite-dimensional group of symmetries. On the flexible side, there are manifolds with an extremely large group of holomorphic automorphisms, the prototypes being the affine spaces $\mathbb C^n$ for $n \geq 2$. From a dynamical point of view, hyperbolicity does not permit chaos. An endomorphism of a Kobayashi-hyperbolic manifold is non-expansive with respect to the Kobayashi distance, so every family of endomorphisms is equicontinuous. We show that not only does flexibility allow chaos: under a strong anti-hyperbolicity assumption, chaotic automorphisms are generic. A special case of our main result is that if $G$ is a connected complex linear algebraic group of dimension at least 2, not semisimple, then chaotic automorphisms are generic among all holomorphic automorphisms of $G$ that preserve a left- or right-invariant Haar form. For $G=\mathbb C^n$, this result was proved (although not explicitly stated) some 20 years ago by Fornaess and Sibony. Our generalisation follows their approach. I will give plenty of context and background, as well as some details of the proof of the main result.
Quantifying language change
15:10 Fri 1 Jun, 2018 :: Horace Lamb 1022 :: A/Prof Eduardo Altmann :: University of Sydney

Mathematical methods to study natural language are increasingly important because of the ubiquity of textual data on the Internet. In this talk I will discuss mathematical models and statistical methods to quantify the variability of language, with a focus on two problems: (i) how has the vocabulary of languages changed over the last centuries? (ii) how do the languages of scientific disciplines relate to each other, and how have they evolved over the last decades? One of the main challenges of these analyses stems from universal properties of word frequencies, which show high temporal variability and are fat-tailed distributed. The latter feature dramatically affects the statistical properties of entropy-based estimators, which motivates us to compare vocabularies using a generalized Jensen-Shannon divergence (obtained from entropies of order alpha).
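The order-alpha divergence mentioned above can be sketched numerically. This is a minimal illustration assuming the Tsallis-type generalized entropy; the exact estimators used in the talk may differ in detail:

```python
import numpy as np

def tsallis_entropy(p, alpha):
    """Generalized (Tsallis) entropy of order alpha; Shannon in the limit alpha -> 1."""
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p ** alpha)) / (alpha - 1.0)

def jsd_alpha(p, q, alpha=1.0):
    """Generalized Jensen-Shannon divergence between two frequency distributions."""
    m = 0.5 * (p + q)
    return tsallis_entropy(m, alpha) - 0.5 * (
        tsallis_entropy(p, alpha) + tsallis_entropy(q, alpha))

# Two toy word-frequency distributions over a four-word vocabulary
p = np.array([0.5, 0.3, 0.2, 0.0])
q = np.array([0.1, 0.2, 0.3, 0.4])
print(jsd_alpha(p, q, alpha=2.0))
```

For alpha = 1 this recovers the ordinary Jensen-Shannon divergence; increasing alpha down-weights the rare words in the fat tail, which stabilises the estimator.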
Tales of Multiple Regression: Informative Missingness, Recommender Systems, and R2-D2
15:10 Fri 17 Aug, 2018 :: Napier 208 :: Prof Howard Bondell :: University of Melbourne

In this talk, we briefly discuss two projects tangentially related under the umbrella of high-dimensional regression. The first part of the talk investigates informative missingness in the framework of recommender systems. In this setting, we envision a potential rating for every object-user pair. The goal of a recommender system is to predict the unobserved ratings in order to recommend an object that the user is likely to rate highly. A typically overlooked point is that the ratings are not missing at random. For example, in movie ratings, a relationship between the user ratings and their viewing history is expected, as human nature dictates that users seek out movies they anticipate enjoying. We model this informative missingness, and place the recommender system in a shared-variable regression framework which can aid prediction quality. The second part of the talk deals with a new class of prior distributions for shrinkage regularization in sparse linear regression, particularly in the high-dimensional case. Instead of placing a prior on the coefficients themselves, we place a prior on the regression R-squared. This is then distributed to the coefficients by decomposing it via a Dirichlet distribution. We call the new prior R2-D2 in light of its R-Squared Dirichlet Decomposition. Compared to existing shrinkage priors, we show that the R2-D2 prior can simultaneously achieve both high prior concentration at zero and heavier tails. These two properties combine to provide a higher degree of shrinkage on the irrelevant coefficients, along with less bias in the estimation of the larger signals.
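The prior construction described above can be sketched as follows. This is an illustrative sketch only: a normal component stands in for the heavier-tailed mixing used in the actual R2-D2 prior, and the hyperparameter names (a, b, a_pi) are placeholders, not the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(0)

def r2d2_prior_draw(p, a=1.0, b=1.0, a_pi=0.5, sigma2=1.0):
    """One draw of p regression coefficients from an R2-D2-style prior (sketch).

    1. Draw the prior R-squared from a Beta(a, b).
    2. Map R^2 to a total signal variance omega = R^2 / (1 - R^2).
    3. Split omega across coefficients with a Dirichlet(a_pi, ..., a_pi);
       small a_pi concentrates the mass on few coefficients (sparsity).
    4. Draw each coefficient with its allocated variance share.
    """
    r2 = rng.beta(a, b)                    # prior on the model's R-squared
    omega = r2 / (1.0 - r2)                # total signal-to-noise
    phi = rng.dirichlet(np.full(p, a_pi))  # Dirichlet decomposition, sums to 1
    return rng.normal(0.0, np.sqrt(sigma2 * omega * phi))

print(r2d2_prior_draw(10))
```

The key design point survives the simplification: shrinkage is controlled globally through R-squared, a quantity the analyst can reason about directly, rather than coefficient by coefficient.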
Topological Data Analysis
15:10 Fri 31 Aug, 2018 :: Napier 208 :: Dr Vanessa Robins :: Australian National University

Topological Data Analysis has grown out of work focussed on deriving qualitative, yet quantifiable, information about the shape of data. The underlying assumption is that knowledge of shape - the way the data are distributed - permits high-level reasoning and modelling of the processes that created this data. The 0-th order aspect of shape is the number of pieces: "connected components" to a topologist; "clustering" to a statistician. Higher-order topological aspects of shape are holes, quantified as "non-bounding cycles" in homology theory. These signal the existence of some type of constraint on the data-generating process. Homology lends itself naturally to computer implementation, but its naive application is not robust to noise. This inspired the development of persistent homology: an algebraic topological tool that measures changes in the topology of a growing sequence of spaces (a filtration). Persistent homology provides invariants called barcodes or persistence diagrams: sets of intervals recording the birth and death parameter values of each homology class in the filtration. It captures information about the shape of data over a range of length scales, and enables the identification of "noisy" topological structure. Statistical analysis of persistent homology has been challenging because the raw information (the persistence diagrams) is provided as sets of intervals rather than functions. Various approaches to converting persistence diagrams to functional forms have been developed recently, and have found application to data ranging from the distribution of galaxies, to porous materials, and cancer detection.
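The birth-death pairing described above can be made concrete in the simplest case: 0-dimensional sublevel-set persistence of a 1-D sequence, computed with a union-find structure and the elder rule. A minimal sketch, not a library implementation:

```python
def persistence_0d(values):
    """0-dimensional sublevel-set persistence of a 1-D sequence.

    Sweep the points in increasing order of value: each local minimum gives
    birth to a component; when two components meet, the younger one dies
    (the "elder rule").  Returns (birth, death) pairs, with death = None
    for the component that never dies.
    """
    n = len(values)
    parent = [None] * n          # union-find; None = not yet in the filtration

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    pairs = []
    order = sorted(range(n), key=lambda i: values[i])
    for i in order:
        parent[i] = i
        for j in (i - 1, i + 1):
            if 0 <= j < n and parent[j] is not None:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # the root born later (at the larger value) dies at the merge
                    young, old = (ri, rj) if values[ri] > values[rj] else (rj, ri)
                    if values[young] < values[i]:   # skip zero-persistence pairs
                        pairs.append((values[young], values[i]))
                    parent[young] = old
    root = find(order[0])
    pairs.append((values[root], None))     # the globally oldest class never dies
    return pairs

print(persistence_0d([3, 1, 4, 0, 2]))    # [(1, 4), (0, None)]
```

Here the component born at the local minimum 1 dies when it merges, at value 4, into the older component born at 0. Higher-dimensional filtrations are handled by libraries such as GUDHI or Dionysus.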
Exceptional quantum symmetries
11:10 Fri 5 Oct, 2018 :: Barr Smith South Polygon Lecture theatre :: Scott Morrison :: Australian National University

I will survey our current understanding of "quantum symmetries", the mathematical models of topological order, in particular through the formalism of fusion categories. Our very limited classification results to date point to nearly all examples being built out of data coming from finite groups, quantum groups at roots of unity, and cohomological data. However, there are a small number of "exceptional" quantum symmetries that so far appear to be disconnected from the world of classical symmetries as studied in representation theory and group theory. I'll give an update on recent progress understanding these examples.
Interactive theorem proving for mathematicians
15:10 Fri 5 Oct, 2018 :: Napier 208 :: A/Prof Scott Morrison :: Australian National University

Mathematicians use computers to write their proofs (LaTeX) and to do their calculations (Sage, Mathematica, Maple, Matlab, etc., as well as custom code for simulations or searches). However, today we rarely use computers to help us construct and understand proofs. There is a long tradition in computer science of interactive and automatic theorem proving; today these are particularly important tools in engineering correct software, as well as in optimisation and compilation. There have been some notable examples of formalisation of modern mathematics (e.g. the odd order theorem, the Kepler conjecture, and the four-colour theorem). Even in these cases, huge engineering efforts were required to translate the mathematics into a form a computer could understand. Moreover, in most areas of research there is a huge gap between the interests of human mathematicians and the abilities of computer provers. Nevertheless, I think it's time for mathematicians to start getting interested in interactive theorem provers! It's now possible to write proofs, and write tools that help write proofs, in languages which are expressive enough to encompass most of modern mathematics, and ergonomic enough to use for general purpose programming. I'll give an informal introduction to dependent type theory (the logical foundation of many modern theorem provers), some examples of doing mathematics in such a system, and my experiences working with mathematics students in these systems.
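To give a flavour of what interacting with such a system looks like, here is a small sketch in Lean 4 (tactic and lemma names follow the core library and may vary between versions):

```lean
-- Statements that hold by definitional computation are one-liners:
example : 2 + 2 = 4 := rfl

-- A statement needing real reasoning is built interactively with tactics;
-- the prover checks every step and reports the remaining goals.
theorem my_add_comm (m n : Nat) : m + n = n + m := by
  induction n with
  | zero => simp
  | succ k ih => simp [Nat.add_succ, Nat.succ_add, ih]
```

The point of the formalism is that nothing is taken on trust: the kernel re-checks each inference down to the axioms of the underlying dependent type theory.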
The role of microenvironment in regulation of cell infiltration and bortezomib-OV therapy in glioblastoma
15:10 Fri 11 Jan, 2019 :: IW 5.57 :: Professor Yangjin Kim :: Konkuk University, South Korea

Tumor microenvironment (TME) plays a critical role in regulation of tumor cell invasion in glioblastoma. Many microenvironmental factors such as extracellular matrix, microglia and astrocytes can either block or enhance this critical infiltration step in the brain [4]. Oncolytic viruses such as herpes simplex virus-1 (oHSV) are genetically modified to target and kill cancer cells while not harming healthy normal cells, and are currently in multiple clinical trials for safety and efficacy [1]. Bortezomib is a peptide-based proteasome inhibitor and is an FDA-approved drug for myeloma and mantle cell lymphoma. Yoo et al. [2] have previously demonstrated that bortezomib-induced unfolded protein response (UPR) in many tumor cell lines (glioma, ovarian, and head and neck) up-regulated expression of heat shock protein 90 (HSP90), which then enhanced viral replication through promotion of nuclear localization of the viral polymerase in vitro. This led to synergistic tumor cell killing in vitro, and a combination treatment of mice with oHSV and bortezomib showed improved anti-tumor efficacy in vivo [2]. This combination therapy also increased the surface expression levels of NK cell activating markers and enhanced pro-inflammatory cytokine secretion. These findings demonstrated that the synergistic interaction between oHSV and bortezomib, a clinically relevant proteasome inhibitor, augments the cancer cell killing and promotes overall therapeutic efficacy. We investigated the role of NK cells in combination therapy with oncolytic virus (OV) and bortezomib. NK cells display rapid and potent immunity to metastasis and hematological cancers, and they overcome immunosuppressive effects of the tumor microenvironment. We developed a mathematical model, a system of PDEs, in order to address the question of how the density of NK cells affects the growth of the tumor [3].
We found that the anti-tumor efficacy increases when the endogenous NKs are depleted, and also when exogenous NK cells are injected into the tumor. We also show that the TME plays a significant role in anti-tumor efficacy in OV combination therapy, and illustrate the effect of different spatial patterns of OV injection [5]. The results illustrate a possible phenotypic switch within tumor populations in a given microenvironment, and suggest new anti-invasion therapies. These predictions were validated by our in vivo and in vitro experiments.
References
[1] Kanai R, … Rabkin SD, "Oncolytic herpes simplex virus vectors and chemotherapy: are combinatorial strategies more effective for cancer?", Future Oncology, 6(4), 619-634, 2010.
[2] Yoo J, et al., "Bortezomib-induced unfolded protein response increases oncolytic HSV-1 replication resulting in synergistic antitumor effect", Clin Cancer Res, 20(14), 3787-3798, 2014.
[3] Yangjin Kim, … Balveen Kaur and Avner Friedman, "Complex role of NK cells in regulation of oncolytic virus-bortezomib therapy", PNAS, 115(19), 4927-4932, 2018.
[4] Yangjin Kim, … Sean Lawler, and Mark Chaplain, "Role of extracellular matrix and microenvironment in regulation of tumor growth and LAR-mediated invasion in glioblastoma", PLoS One, 13(10):e0204865, 2018.
[5] Yangjin Kim, …, Hans G. Othmer, "Synergistic effects of bortezomib-OV therapy and anti-invasive strategies in glioblastoma: A mathematical model", special issue, submitted, 2018.

Publications matching "Fibonacci: order, chaos, and the Holy Grail"

Topological chaos in flows on surfaces of arbitrary genus
Finn, Matthew; Thiffeault, J, XXII International Congress of Theoretical and Applied Mechanics, Adelaide 24/08/08
Topology of chaotic mixing patterns
Thiffeault, J; Finn, Matthew; Gouillart, E; Hall, T, Chaos 18 (033123-1–033123-16) 2008
Goodness-of-fit tests based on characterizations involving moments of order statistics
Morris, Kerwin; Szynal, D, International Journal of Pure and Applied Mathematics 38 (83–121) 2007
A comparison of two approaches to second-order subdifferentiability concepts with application to optimality conditions
Eberhard, A; Pearce, Charles, chapter in Applied optimization - Optimization and control with applications (Springer-Verlag) 35–100, 2005
First order characterization of Internet traffic matrices
Roughan, Matthew, 55th session of the International Statistics Institute (ISI), Sydney, NSW Australia 05/04/05
Forced solitary waves and fronts past submerged obstacles
Binder, Benjamin; Vanden-Broeck, J; Dias, F, Chaos 15 (37106-1–37106-13) 2005
Higher order accuracy in the gap-tooth scheme for large-scale dynamics using microscopic simulators
Roberts, Anthony John; Kevrekidis, I, The ANZIAM Journal 46 (C637–C657) 2005
A fundamental solution for linear second-order elliptic systems with variable coefficients
Clements, David, Journal of Engineering Mathematics 49 (209–216) 2004
Goodness-of-fit tests using dual versions of characterizations via moments of order statistics
Morris, Kerwin; Szynal, D, Journal of Mathematical Sciences 122 (3365–3383) 2004
On dual characterizations of continuous distributions in terms of expected values of two functions of order statistics and record values
Alinowska, I; Morris, Kerwin; Szynal, D, Journal of Mathematical Sciences 121 (2664–2673) 2004
Subquadrangles of order s of generalized quadrangles of order (s, s²), Part I
Brown, Matthew; Thas, J, Journal of Combinatorial Theory Series A 106 (15–32) 2004
Subquadrangles of order s of generalized quadrangles of order (s, s²), Part II
Brown, Matthew; Thas, J, Journal of Combinatorial Theory Series A 106 (33–48) 2004
Goodness-of-fit tests based on characterizations in terms of moments of order statistics
Morris, Kerwin; Szynal, D, Applicationes Mathematicae 29 (251–283) 2002
Higher-order statistical moments of wave-induced response of offshore structures via efficient sampling techniques
Najafian, G; Burrows, R; Tickell, R; Metcalfe, Andrew, International Offshore and Polar Engineering Conference 3 (465–470) 2002
An optimal filter of the second order
Torokhti, Anatoli; Howlett, P, IEEE Transactions on Signal Processing 49 (1044–1048) 2001
Brownian ratchets and Parrondo's games
Harmer, Gregory; Abbott, Derek; Taylor, Peter; Parrondo, J, Chaos 11 (705–714) 2001
Subquadrangles of generalized quadrangles of order (q², q), q even
O'Keefe, Christine; Penttila, T, Journal of Combinatorial Theory Series A 94 (218–229) 2001
Hadamard and Dragomir-Agarwal inequalities, higher-order convexity and the Euler formula
Dedic, L; Pearce, Charles; Pecaric, J, Journal of the Korean Mathematical Society (–) 2001

Advanced search options

You may be able to improve your search results by using the following syntax:

Query                       Matches the following
Asymptotic Equation         Anything with "Asymptotic" or "Equation".
+Asymptotic +Equation       Anything with "Asymptotic" and "Equation".
+Stokes -"Navier-Stokes"    Anything containing "Stokes" but not "Navier-Stokes".
Dynam*                      Anything containing "Dynamic", "Dynamical", "Dynamicist" etc.