The University of Adelaide
August 2019


Courses matching "Can statisticians do better than random guessing"

Random Processes III

This course introduces students to the fundamental concepts of random processes, particularly continuous-time Markov chains, and related structures. These are the essential building blocks of any random system, be it a telecommunications network, a hospital waiting list or a transport system. They also arise in many other settings where you wish to capture the development of some element of random behaviour over time, such as the state of the surrounding environment. Topics covered are: Continuous-time Markov chains: definition and basic properties, transient behaviour, the stationary distribution, hitting probabilities and expected hitting times, reversibility; Basic Queueing Theory: arrival processes, service time distributions, Little's Law; Point Processes: Poisson process, properties and generalisations; Renewal Processes: preliminaries, renewal function, renewal theory and applications, stationary and delayed renewal processes; Queueing Networks: Kendall's notation, Jackson networks, mean value analysis; Loss Networks: truncated reversible processes, circuit-switched networks, reduced load approximations.
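As a taster for the first topic, here is a minimal Python sketch (the rates and horizon are invented for illustration, not from the course) that simulates a two-state continuous-time Markov chain and recovers its stationary distribution from time averages:

```python
import random

# Simulate a two-state continuous-time Markov chain with rate a for
# jumps 0 -> 1 and rate b for jumps 1 -> 0, and estimate the stationary
# distribution from the fraction of time spent in each state.
def simulate_ctmc(a, b, t_end, seed=0):
    rng = random.Random(seed)
    state, t = 0, 0.0
    time_in = [0.0, 0.0]
    while t < t_end:
        rate = a if state == 0 else b
        hold = rng.expovariate(rate)      # exponential holding time
        hold = min(hold, t_end - t)       # truncate at the horizon
        time_in[state] += hold
        t += hold
        state = 1 - state                 # jump to the other state
    return [x / t_end for x in time_in]

pi_hat = simulate_ctmc(a=1.0, b=2.0, t_end=10_000.0)
# exact stationary distribution: (b/(a+b), a/(a+b)) = (2/3, 1/3)
```

For a long horizon the time averages approach the exact stationary distribution (2/3, 1/3).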

More about this course...

Events matching "Can statisticians do better than random guessing"

Statistical Critique of the Intergovernmental Panel on Climate Change's work on Climate Change.
18:00 Wed 17 Oct, 2007 :: Union Hall University of Adelaide :: Mr Dennis Trewin

Climate change is one of the most important issues facing us today. Many governments have introduced or are developing appropriate policy interventions to (a) reduce the growth of greenhouse gas emissions in order to mitigate future climate change, or (b) adapt to future climate change. This important work deserves a high-quality statistical database but there are statistical shortcomings in the work of the Intergovernmental Panel on Climate Change (IPCC). There has been very little involvement of qualified statisticians in the very important work of the IPCC, which appears to be scientifically meritorious in most other ways. Mr Trewin will explain these shortcomings and outline his views on likely future climate change, taking into account the statistical deficiencies. His conclusions suggest climate change is still an important issue that needs to be addressed but the range of likely outcomes is a lot lower than has been suggested by the IPCC. This presentation will be based on an invited paper presented at the OECD World Forum.
Counting fish
13:10 Wed 19 Mar, 2008 :: Napier 210 :: Mr Jono Tuke

How often have you asked yourself: "I wonder how many fish are in that lake?" Probably never, but if you ever did, then this is the lecture for you. The solution is easy (Seuss, 1960), but raises the question of how good the answer is. I will answer this by looking at confidence intervals. In the lecture, I will discuss what a confidence interval is and how to calculate it using techniques for calculating probabilities in poker. I will also look at how these ideas have been used in epidemiology, the study of disease, to estimate the number of people with diabetes. [1] Seuss, Dr. (1960). "One Fish Two Fish Red Fish Blue Fish". Random House Books.
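One standard answer to the fish question (not necessarily the method used in the talk) is mark-recapture: mark n1 fish, release them, later catch n2 and count the m marked ones. A sketch with made-up numbers:

```python
# Lincoln-Petersen estimate of the population: N = n1 * n2 / m.
# Chapman's variant reduces the bias and avoids division by zero.
def lincoln_petersen(n1, n2, m):
    if m == 0:
        raise ValueError("no marked fish recaptured")
    return n1 * n2 / m

def chapman(n1, n2, m):
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

n_hat = lincoln_petersen(100, 60, 12)   # -> 500.0
```

The confidence-interval question the talk raises is then about how far such an estimate can stray from the truth.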
Elliptic equation for diffusion-advection flows
15:10 Fri 15 Aug, 2008 :: G03 Napier Building University of Adelaide :: Prof. Pavel Bedrikovetsky :: Australian School of Petroleum Science, University of Adelaide.

The standard diffusion equation is obtained by Einstein's method and its generalisation, Fokker-Planck-Kolmogorov-Feller theory. The time between jumps in Einstein's derivation is constant.

We discuss random walks with a residence-time distribution, which occur in flows of solutes and suspensions/colloids in porous media, CO2 sequestration in coal mines, and several processes in chemical, petroleum and environmental engineering. Rigorous application of Einstein's method results in a new equation containing time and mixed dispersion terms that express the dispersion of the particle time steps.

Usually, adding a second time derivative requires additional initial data. For the equation derived, the condition that the solution remain bounded as time tends to infinity ensures uniqueness of the Cauchy problem solution.

The solution of the pulse injection problem, describing a common tracer injection experiment, is studied in greater detail. The new theory predicts a delay of the tracer maximum relative to the velocity of the flow, while the forward "tail" contains many more particles than in the solution of the classical parabolic (advection-dispersion) equation. This agrees with experimental observations and with predictions of direct simulation.

Multi-scale tools for interpreting cell biology data
15:10 Fri 17 Apr, 2009 :: Napier LG29 :: Dr Matthew Simpson :: University of Melbourne

Trajectory data from observations of a random walk process are often used to characterize macroscopic transport coefficients and to infer motility mechanisms in cell biology. New continuum equations describing the average moments of the position of an individual agent in a population of interacting agents are derived and validated. Unlike standard noninteracting random walks, the new moment equations explicitly represent the interactions between agents as they are coupled to the macroscopic agent density. Key issues associated with the validity of the new continuum equations and the interpretation of experimental data will be explored.
Predicting turbulence
12:10 Wed 12 Aug, 2009 :: Napier 210 :: Dr Trent Mattner :: University of Adelaide

Turbulence is characterised by three-dimensional unsteady fluid motion over a wide range of spatial and temporal scales. It is important in many problems of technological and scientific interest, such as drag reduction, energy production and climate prediction. In this talk, I will explain why turbulent flows are difficult to predict and describe a modern mathematical model of turbulence based on a random collection of fluid vortices.
Random walk integrals
13:10 Fri 16 Apr, 2010 :: School Board Room :: Prof Jonathan Borwein :: University of Newcastle

Following Pearson in 1905, we study the expected distance of a two-dimensional walk in the plane with unit steps in random directions---what Pearson called a "ramble". A series evaluation and recursions are obtained, making it possible to explicitly determine this distance for a small number of steps. Closed-form expressions for all the moments of a 2-step and a 3-step walk are given, and a formula is conjectured for the 4-step walk. Heavy use is made of the analytic continuation of the underlying integral.
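The n = 2 case has the closed form 4/pi for the expected distance, which a quick Monte Carlo check reproduces (the simulation below is illustrative, not from the paper):

```python
import math
import random

# Expected distance from the origin after n unit steps in uniformly
# random directions.  For n = 2 the closed form is 4/pi ~ 1.2732.
def expected_distance(n_steps, n_trials=200_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        x = y = 0.0
        for _ in range(n_steps):
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += math.cos(theta)
            y += math.sin(theta)
        total += math.hypot(x, y)
    return total / n_trials

d2 = expected_distance(2)
```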
A spatial-temporal point process model for fine resolution multisite rainfall data from Roma, Italy
14:10 Thu 19 Aug, 2010 :: Napier G04 :: A/Prof Paul Cowpertwait :: Auckland University of Technology

A point process rainfall model is further developed that has storm origins occurring in space-time according to a Poisson process. Each storm origin has a random radius so that storms occur as circular regions in two-dimensional space, where the storm radii are taken to be independent exponential random variables. Storm origins are of random type z, where z follows a continuous probability distribution. Cell origins occur in a further spatial Poisson process and have arrival times that follow a Neyman-Scott point process. Cell origins have random radii so that cells form discs in two-dimensional space. Statistical properties up to third order are derived and used to fit the model to 10 min series taken from 23 sites across the Roma region, Italy. Distributional properties of the observed annual maxima are compared to equivalent values sampled from series that are simulated using the fitted model. The results indicate that the model will be of use in urban drainage projects for the Roma region.
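One ingredient of such a model can be sketched as follows (all parameter values and names are invented for illustration): storm origins form a homogeneous spatial Poisson process in a window, each origin carries an independent exponential radius, and we count how many storm discs cover a given site:

```python
import math
import random

def poisson_sample(rng, mu):
    # Knuth's method; fine for moderate mu
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p < limit:
            return k
        k += 1

def covering_storms(site, lam, mean_radius, size, seed=2):
    # lam = storm-origin rate per unit area in a size x size window
    rng = random.Random(seed)
    count = 0
    for _ in range(poisson_sample(rng, lam * size * size)):
        cx, cy = rng.uniform(0.0, size), rng.uniform(0.0, size)
        radius = rng.expovariate(1.0 / mean_radius)   # exponential radius
        if math.hypot(site[0] - cx, site[1] - cy) <= radius:
            count += 1
    return count

n_cover = covering_storms(site=(5.0, 5.0), lam=0.5, mean_radius=1.0, size=10.0)
```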
Principal Component Analysis Revisited
15:10 Fri 15 Oct, 2010 :: Napier G04 :: Assoc. Prof Inge Koch :: University of Adelaide

Since the beginning of the 20th century, Principal Component Analysis (PCA) has been an important tool in the analysis of multivariate data. The principal components summarise data in fewer than the original number of variables without losing essential information, and thus allow a split of the data into signal and noise components. PCA is a linear method, based on elegant mathematical theory. The increasing complexity of data together with the emergence of fast computers in the later parts of the 20th century has led to a renaissance of PCA. The growing numbers of variables (in particular, high-dimensional low sample size problems), non-Gaussian data, and functional data (where the data are curves) are posing exciting challenges to statisticians, and have resulted in new research which extends the classical theory. I begin with the classical PCA methodology and illustrate the challenges presented by the complex data that we are now able to collect. The main part of the talk focuses on extensions of PCA: the duality of PCA and the Principal Coordinates of Multidimensional Scaling, Sparse PCA, and consistency results relating to principal components, as the dimension grows. We will also look at newer developments such as Principal Component Regression and Supervised PCA, nonlinear PCA and Functional PCA.
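Classical PCA in its simplest form: centre the data, form the sample covariance matrix, and take its leading eigenvector. For 2-D data the eigenproblem has a closed form, so a dependency-free sketch (illustrative only) fits in a few lines:

```python
import math

# Return the leading eigenvalue and eigenvector (first principal
# component) of the 2x2 sample covariance matrix of 2-D points.
def first_pc(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    lam = 0.5 * (sxx + syy) + math.sqrt(0.25 * (sxx - syy) ** 2 + sxy ** 2)
    if abs(sxy) > 1e-12:
        v = (lam - syy, sxy)      # eigenvector of [[sxx, sxy], [sxy, syy]]
    else:
        v = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(*v)
    return lam, (v[0] / norm, v[1] / norm)

lam, pc1 = first_pc([(0, 0), (1, 2), (2, 4), (3, 6)])
# these data lie exactly on y = 2x, so pc1 is proportional to (1, 2)
```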
Statistical physics and behavioral adaptation to Creation's main stimuli: sex and food
15:10 Fri 29 Oct, 2010 :: E10 B17 Suite 1 :: Prof Laurent Seuront :: Flinders University and South Australian Research and Development Institute

Animals typically search for food and mates, while avoiding predators. This is particularly critical for keystone organisms such as intertidal gastropods and copepods (i.e. millimeter-scale crustaceans) as they typically rely on non-visual senses for detecting, identifying and locating mates in their two- and three-dimensional environments. Here, using stochastic methods derived from the field of nonlinear physics, we provide new insights into the nature (i.e. innate vs. acquired) of the motion behavior of gastropods and copepods, and demonstrate how changes in their behavioral properties can be used to identify the trade-offs between foraging for food or sex. The gastropod Littorina littorea hence moves according to fractional Brownian motion while foraging for food (in accordance with the fractal nature of food distributions), and switches to Brownian motion while foraging for sex. In contrast, the swimming behavior of the copepod Temora longicornis belongs to the class of multifractal random walks (MRW; i.e. a form of anomalous diffusion), characterized by a nonlinear moment scaling function for distance versus time. This clearly differs from the traditional Brownian and fractional Brownian walks expected or previously detected in animal behaviors. The divergence between MRW and Levy flight and walk is also discussed, and it is shown how copepod anomalous diffusion is enhanced by the presence and concentration of conspecific water-borne signals, and dramatically increases male-female encounter rates.
Queues with skill based routing under FCFS–ALIS regime
15:10 Fri 11 Feb, 2011 :: B17 Ingkarni Wardli :: Prof Gideon Weiss :: The University of Haifa, Israel

We consider a system where jobs of several types are served by servers of several types, and a bipartite graph between server types and job types describes feasible assignments. This is a common situation in manufacturing, in call centers with skill-based routing, in matching parents and children in adoption, in kidney-transplant matching, etc. We consider the case of a first come first served policy: jobs are assigned to the first available feasible server in order of their arrivals. We consider two types of policies for assigning customers to idle servers: random assignment, and assignment to the longest idle server (ALIS). We survey some results for four different situations:

  • For a loss system we find conditions for reversibility and insensitivity.
  • For a manufacturing type system, in which there is enough capacity to serve all jobs, we discuss a product form solution and waiting times.
  • For an infinite matching model, in which an infinite sequence of customers of IID types and an infinite sequence of servers of IID types are matched according to first come first served, we obtain a product-form stationary distribution for this system, which we use to calculate matching rates.
  • For a call center model with overload and abandonments we make some plausible observations.

This talk surveys joint work with Ivo Adan, Rene Caldentey, Cor Hurkens, Ed Kaplan and Damon Wischik, as well as work by Jeremy Visschers, Rishy Talreja and Ward Whitt.

Classification for high-dimensional data
15:10 Fri 1 Apr, 2011 :: Conference Room Level 7 Ingkarni Wardli :: Associate Prof Inge Koch :: The University of Adelaide

For two-class classification problems Fisher's discriminant rule performs well in many scenarios provided the dimension, d, is much smaller than the sample size n. As the dimension increases, Fisher's rule may no longer be adequate, and can perform as poorly as random guessing. In this talk we look at new ways of overcoming this poor performance for high-dimensional data by suitably modifying Fisher's rule, and in particular we describe the 'Features Annealed Independence Rule' (FAIR) of Fan and Fan (2008) and a rule based on canonical correlation analysis. I describe some theoretical developments, and also show analysis of data which illustrates the performance of these modified rules.
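To fix ideas, here is a stripped-down relative of Fisher's rule (identity covariance assumed, so this is a sketch of the idea rather than the rule analysed in the talk): classify by which side of the midpoint, along the difference of class means, a point falls:

```python
# Project x onto w = mean1 - mean0 and compare with the midpoint of the
# two class means; return the predicted class label (0 or 1).
def midpoint_rule(mean0, mean1, x):
    w = [a - b for a, b in zip(mean1, mean0)]
    mid = [(a + b) / 2 for a, b in zip(mean0, mean1)]
    score = sum(wi * (xi - mi) for wi, xi, mi in zip(w, x, mid))
    return 1 if score > 0 else 0

label = midpoint_rule([0.0, 0.0], [2.0, 2.0], [1.6, 1.6])   # -> 1
```

Fisher's actual rule replaces the identity covariance with an estimated pooled covariance, and it is that estimation step that degrades when d exceeds n.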
How to value risk
12:10 Mon 11 Apr, 2011 :: 5.57 Ingkarni Wardli :: Leo Shen :: University of Adelaide

A key question in mathematical finance is: given a future random payoff X, what is its value today? If X represents a loss, one can ask how risky is X. To mitigate risk it must be modelled and quantified. The finance industry has used Value-at-Risk and conditional Value-at-Risk as measures. However, these measures are not time consistent and Value-at-Risk can penalize diversification. A modern theory of risk measures is being developed which is related to solutions of backward stochastic differential equations in continuous time and stochastic difference equations in discrete time. I first review risk measures used in mathematical finance, including static and dynamic risk measures. I recall results relating to backward stochastic difference equations (BSDEs) associated with a single jump process. Then I evaluate some numerical examples of the solutions of the backward stochastic difference equations and related risk measures. These concepts are new. I hope the examples will indicate how they might be used.
Priority queueing systems with random switchover times and generalisations of the Kendall-Takacs equation
16:00 Wed 1 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge

In this talk I will review existing analytical results for priority queueing systems with Poisson incoming flows, general service times and a single server which needs some (random) time to switch between requests of different priority. Specifically, I will discuss analytical results for the busy period and workload of such systems with a special structure of switchover times. The results related to the busy period can be seen as generalisations of the famous Kendall-Takács functional equation for $M|G|1$: being formulated in terms of the Laplace-Stieltjes transform, they represent systems of functional recurrent equations. I will present a methodology and algorithms for their numerical solution; the efficiency of these algorithms is achieved by acceleration of the numerical procedure for solving the classical Kendall-Takács equation. At the end I will identify open problems with regard to such systems; these open problems are mainly related to the modelling of switchover times.
Inference and optimal design for percolation and general random graph models (Part I)
09:30 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge

The problem of optimal arrangement of nodes of a random weighted graph is discussed in this workshop. The nodes of the graphs under study are fixed, but their edges are random and established according to the so-called edge-probability function. This function is assumed to depend on the weights attributed to the pairs of graph nodes (or distances between them) and a statistical parameter. It is the purpose of experimentation to make inference on the statistical parameter and thus to extract as much information about it as possible. We also distinguish between two different experimentation scenarios: progressive and instructive designs.

We adopt a utility-based Bayesian framework to tackle the optimal design problem for random graphs of this kind. Simulation-based optimisation methods, mainly Monte Carlo and Markov chain Monte Carlo, are used to obtain the solution. We study the optimal design problem for inference based on partial observations of random graphs by employing a data augmentation technique. We prove that the infinitely growing or diminishing node configurations asymptotically represent the worst node arrangements. We also obtain the exact solution to the optimal design problem for proximity (geometric) graphs and a numerical solution for graphs with threshold edge-probability functions.

We consider inference and optimal design problems for finite clusters from bond percolation on the integer lattice $\mathbb{Z}^d$ and derive a range of both numerical and analytical results for these graphs. We introduce inner-outer plots by deleting some of the lattice nodes and show that the 'mostly populated' designs are not necessarily optimal in the case of incomplete observations under both progressive and instructive design scenarios. Some of the obtained results may generalise to other lattices.

Inference and optimal design for percolation and general random graph models (Part II)
10:50 Wed 8 Jun, 2011 :: 7.15 Ingkarni Wardli :: Dr Andrei Bejan :: The University of Cambridge

The abstract is as for Part I above.

Stochastic models of reaction diffusion
15:10 Fri 17 Jun, 2011 :: 7.15 Ingkarni Wardli :: Prof Jon Chapman :: Oxford University

We consider two different position jump processes: (i) a random walk on a lattice (ii) the Euler scheme for the Smoluchowski differential equation. Both of these reduce to the diffusion equation as the time step and size of the jump tend to zero. We consider the problem of adding chemical reactions to these processes, both at a surface and in the bulk. We show how the "microscopic" parameters should be chosen to achieve the correct "macroscopic" reaction rate. This choice is found to depend on which stochastic model for diffusion is used.
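Process (i) can be checked against the diffusion limit directly: an unbiased lattice walk has position variance n after n steps, matching a diffusion with D = 1/2 in lattice units (variance = 2Dt with t = n). A small simulation with illustrative parameters:

```python
import random

# Estimate the position variance of an unbiased +/-1 lattice walk
# after n_steps steps, averaged over many independent walkers.
def walk_variance(n_steps, n_walkers=20_000, seed=3):
    rng = random.Random(seed)
    total_sq = 0
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            x += rng.choice((-1, 1))
        total_sq += x * x
    return total_sq / n_walkers

v50 = walk_variance(50)   # should be close to 50
```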
The real thing
12:10 Wed 3 Aug, 2011 :: Napier 210 :: Dr Paul McCann :: School of Mathematical Sciences

Let x be a real number. This familiar and seemingly innocent assumption opens up a world of infinite variety and information. We use some simple techniques (powers of two, geometric series) to examine some interesting consequences of generating random real numbers, and encounter both the best flash drive and the worst flash drive you will ever meet. Come "hold infinity in the palm of your hand", and contemplate eternity for about half an hour. Almost nothing is assumed, almost everything is explained, and absolutely all are welcome.
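One of the simple techniques the talk mentions in action: a uniform random real in [0, 1) is an infinite sequence of fair coin flips read as binary digits, and truncating after k flips pins it down to within 2^-k by the geometric series bound. A sketch:

```python
import random

# Build a random real from fair coin flips: the k-th flip decides the
# k-th binary digit, contributing 2**-k when it comes up heads.
def random_real(bits, seed=4):
    rng = random.Random(seed)
    x = 0.0
    for k in range(1, bits + 1):
        if rng.random() < 0.5:
            x += 2.0 ** -k
    return x

x53 = random_real(53)
x10 = random_real(10)   # same seed, so the first ten digits agree
```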
Dealing with the GC-content bias in second-generation DNA sequence data
15:10 Fri 12 Aug, 2011 :: Horace Lamb :: Prof Terry Speed :: Walter and Eliza Hall Institute

The field of genomics is currently dealing with an explosion of data from so-called second-generation DNA sequencing machines. This is creating many challenges and opportunities for statisticians interested in the area. In this talk I will outline the technology and the data flood, and move on to one particular problem where the technology is used: copy-number analysis. There we find a novel bias, which, if not dealt with properly, can dominate the signal of interest. I will describe how we think about and summarize it, and go on to identify a plausible source of this bias, leading up to a way of removing it. Our approach makes use of the total variation metric on discrete measures, but apart from this, is largely descriptive.
Can statisticians do better than random guessing?
12:10 Tue 20 Sep, 2011 :: Napier 210 :: A/Prof Inge Koch :: School of Mathematical Sciences

In the finance or credit risk area, a bank may want to assess whether a client is going to default, or be able to meet the repayments. In the assessment of benign or malignant tumours, a correct diagnosis is required. In these and similar examples, we make decisions based on data. The classical t-tests provide a tool for making such decisions. However, many modern data sets have more variables than observations, and the classical rules may not be any better than random guessing. We consider Fisher's rule for classifying data into two groups, and show that it can break down for high-dimensional data. We then look at ways of overcoming some of the weaknesses of the classical rules, and I show how these "post-modern" rules perform in practice.
Likelihood-free Bayesian inference: modelling drug resistance in Mycobacterium tuberculosis
15:10 Fri 21 Oct, 2011 :: 7.15 Ingkarni Wardli :: Dr Scott Sisson :: University of New South Wales

A central pillar of Bayesian statistical inference is Monte Carlo integration, which is based on obtaining random samples from the posterior distribution. There are a number of standard ways to obtain these samples, provided that the likelihood function can be numerically evaluated. In the last 10 years, there has been a substantial push to develop methods that permit Bayesian inference in the presence of computationally intractable likelihood functions. These methods, termed "likelihood-free" or approximate Bayesian computation (ABC), are now being applied extensively across many disciplines. In this talk, I'll present a brief, non-technical overview of the ideas behind likelihood-free methods. I'll motivate and illustrate these ideas through an analysis of the epidemiological fitness cost of drug resistance in Mycobacterium tuberculosis.
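The core likelihood-free idea fits in a toy rejection-ABC sampler (a generic textbook sketch with invented numbers, not the analysis from the talk): draw parameters from the prior, simulate data from the model, and keep a parameter only when its simulated summary matches the observed one:

```python
import random

# Binomial model with unknown success probability p and a uniform prior;
# exact-match rejection ABC never evaluates the likelihood.
def abc_rejection(observed, n_trials, n_sims=50_000, seed=8):
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_sims):
        p = rng.random()                                   # prior draw
        sim = sum(rng.random() < p for _ in range(n_trials))
        if sim == observed:                                # exact match
            accepted.append(p)
    return accepted

post = abc_rejection(observed=7, n_trials=10)
post_mean = sum(post) / len(post)   # exact posterior Beta(8, 4) has mean 2/3
```

In realistic problems an exact match is impossible, so ABC accepts simulations within a tolerance of the observed summaries; that tolerance is where the approximation enters.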
Mixing, dynamics, and probability
15:10 Fri 2 Mar, 2012 :: B.21 Ingkarni Wardli :: A/Prof Gary Froyland :: University of New South Wales

Many interesting natural phenomena are hard to predict. When modelled as a dynamical system, this unpredictability is often the result of rapid separation of nearby trajectories. Viewing the dynamics as acting on a probability measure, the mixing property states that two measurements (or random variables), evaluated at increasingly separated times, become independent in the time-separation limit. Thus, the later measurement becomes increasingly difficult to predict, given the outcome of the earlier measurement. If this approach to independence occurs exponentially quickly in time, one can profitably use linear operator tools to analyse the dynamics. I will give an overview of these techniques and show how they can be applied to answer mathematical questions, describe observed behaviour in fluid mixing, and analyse models of the ocean and atmosphere.
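A toy illustration of the mixing property (not from the talk): for the doubling map x -> 2x mod 1 with observable cos(2*pi*x), the correlation between measurements vanishes as the time separation grows; for this particular observable it is already zero after one step, while general smooth observables decay geometrically:

```python
import math
import random

# Empirical covariance of f(x_0) and f(x_k) for the doubling map,
# f(x) = cos(2*pi*x), with x_0 uniform on [0, 1).
def orbit_covariance(k, n=200_000, seed=9):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x0 = rng.random()
        xk = x0
        for _ in range(k):
            xk = (2.0 * xk) % 1.0           # one doubling-map step
        total += math.cos(2 * math.pi * x0) * math.cos(2 * math.pi * xk)
    return total / n

c0 = orbit_covariance(0)   # ~ 0.5 (the variance of the observable)
c3 = orbit_covariance(3)   # ~ 0.0 (measurements have decorrelated)
```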
Revenge of the undead statistician part II
13:10 Tue 24 Apr, 2012 :: 7.15 Ingkarni Wardli :: Mr Jono Tuke :: School of Mathematical Sciences

If you only go to one undergraduate seminar this year, then you should have gone to Jim Denier's - it was cracking, but if you decide to go to another, then this one has cholera, Bayesian statistics, random networks and zombies. Warning: may contain an overuse of pop culture references to motivate an interest in statistics.
Evaluation and comparison of the performance of Australian and New Zealand intensive care units
14:10 Fri 25 May, 2012 :: 7.15 Ingkarni Wardli :: Dr Jessica Kasza :: The University of Adelaide

Recently, the Australian Government has emphasised the need for monitoring and comparing the performance of Australian hospitals. Evaluating the performance of intensive care units (ICUs) is of particular importance, given that the most severe cases are treated in these units. Indeed, ICU performance can be thought of as a proxy for the overall performance of a hospital. We compare the performance of the ICUs contributing to the Australian and New Zealand Intensive Care Society (ANZICS) Adult Patient Database, the largest of its kind in the world, and identify those ICUs with unusual performance. It is well-known that there are many statistical issues that must be accounted for in the evaluation of healthcare provider performance. Indicators of performance must be appropriately selected and estimated, investigators must adequately adjust for casemix, statistical variation must be fully accounted for, and adjustment for multiple comparisons must be made. Our basis for dealing with these issues is the estimation of a hierarchical logistic model for the in-hospital death of each patient, with patients clustered within ICUs. Both patient- and ICU-level covariates are adjusted for, with a random intercept and random coefficient for the APACHE III severity score. Given that we expect most ICUs to have similar performance after adjustment for these covariates, we follow Ohlssen et al., JRSS A (2007), and estimate a null model that we expect the majority of ICUs to follow. This methodology allows us to rigorously account for the aforementioned statistical issues, and accurately identify those ICUs contributing to the ANZICS database that have comparatively unusual performance. This is joint work with Prof. Patty Solomon and Assoc. Prof. John Moran.
Continuous random walk models for solute transport in porous media
15:10 Fri 17 Aug, 2012 :: B.21 Ingkarni Wardli :: Prof Pavel Bedrikovetsky :: The University of Adelaide

The classical diffusion (thermal conductivity) equation was derived from the master equation of the random walk and is parabolic. The main assumption was a probabilistic distribution of the jump length while the jump time is constant. Allowing the jump time, as well as the jump length, to be distributed adds a second time derivative to the averaged equations, but the equation becomes ... elliptic! Where should the extra initial condition come from? We discuss how to pose the flow problem so that it is well-posed, an exact 1D solution, and numerous engineering applications. This is joint work with A. Shapiro and H. Yuan.
Wave propagation in disordered media
15:10 Fri 31 Aug, 2012 :: B.21 Ingkarni Wardli :: Dr Luke Bennetts :: The University of Adelaide

Problems involving wave propagation through systems composed of arrays of scattering sources embedded in some background medium will be considered. For example, in a fluids setting, the background medium is the open ocean surface and the scatterers are floating bodies, such as wave energy devices. Waves propagate in very different ways if the system is structured or disordered. If the disorder is random the problem is to determine the `effective' wave propagation properties by considering the ensemble average over all possible realisations of the system. I will talk about semi-analytical (i.e. low numerical cost) approaches to determining the effective properties.
Principal Component Analysis (PCA)
12:30 Mon 3 Sep, 2012 :: B.21 Ingkarni Wardli :: Mr Lyron Winderbaum :: University of Adelaide

Principal Component Analysis (PCA) has become something of a buzzword recently in a number of disciplines, including gene expression analysis and facial recognition. It is a classical, and fundamentally simple, concept that has been around since the early 1900s; its recent popularity is largely due to the need for dimension-reduction techniques in analysing the high-dimensional data that have become more common in the last decade, and the availability of computing power to implement this. I will explain the concept, prove a result, and give a couple of examples. The talk should be accessible to all disciplines as it (should?) only assume first-year linear algebra, the concept of a random variable, and covariance.
Numerical Free Probability: Computing Eigenvalue Distributions of Algebraic Manipulations of Random Matrices
15:10 Fri 2 Nov, 2012 :: B.20 Ingkarni Wardli :: Dr Sheehan Olver :: The University of Sydney

Suppose that the global eigenvalue distributions of two large random matrices A and B are known. It is a remarkable fact that, generically, the eigenvalue distribution of A + B and (if A and B are positive definite) A*B are uniquely determined from only the eigenvalue distributions of A and B; i.e., no information about eigenvectors is required. These operations on eigenvalue distributions are described by free probability theory. We construct a numerical toolbox that can efficiently and reliably calculate these operations with spectral accuracy, by exploiting the complex analytical framework that underlies free probability theory.
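A numerical hint at why only eigenvalue distributions matter (a toy check of asymptotic freeness, not the spectral toolbox from the talk): for large independent GOE-type matrices, the normalised mixed trace tr(AB)/n is negligible, so the second moments of the spectra simply add for A + B:

```python
import random

# Symmetric Gaussian (GOE-type) matrix, scaled so eigenvalues are O(1).
def goe(n, rng):
    m = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(n)]
    return [[(m[i][j] + m[j][i]) / (2.0 * n ** 0.5) for j in range(n)]
            for i in range(n)]

# Normalised trace of the product, tr(a b) / n.
def norm_trace_product(a, b):
    n = len(a)
    return sum(a[i][j] * b[j][i] for i in range(n) for j in range(n)) / n

rng = random.Random(5)
n = 120
A, B = goe(n, rng), goe(n, rng)
mixed = norm_trace_product(A, B)                               # ~ 0
m2_sum = norm_trace_product(A, A) + norm_trace_product(B, B)   # ~ 1
```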
Asymptotic independence of (simple) two-dimensional Markov processes
15:10 Fri 1 Mar, 2013 :: B.18 Ingkarni Wardli :: Prof Guy Latouche :: Universite Libre de Bruxelles

The one-dimensional birth-and-death model is one of the basic processes in applied probability, but difficulties appear as one moves to higher dimensions. In the positive recurrent case, the situation is singularly simplified if the stationary distribution has product form. We investigate the conditions under which this property holds, and we show how to use the knowledge to find product-form approximations for otherwise unmanageable random walks. This is joint work with Masakiyo Miyazawa and Peter Taylor.
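The simplest instance of product form (illustrative, not the random walks studied in the talk): the M/M/1 queue, a basic one-dimensional birth-and-death process, has a geometric stationary distribution, and two independent such queues have a joint stationary distribution that factorises:

```python
# Stationary probability of k customers in an M/M/1 queue with
# arrival rate lam < service rate mu: pi_k = (1 - rho) * rho**k.
def mm1_pi(lam, mu, k):
    rho = lam / mu
    return (1.0 - rho) * rho ** k

# Product form for two independent queues.
def joint_pi(lam1, mu1, lam2, mu2, j, k):
    return mm1_pi(lam1, mu1, j) * mm1_pi(lam2, mu2, k)

# Detailed balance across the cut between states k and k + 1:
lhs = 0.3 * mm1_pi(0.3, 1.0, 4)   # flow up out of state 4
rhs = 1.0 * mm1_pi(0.3, 1.0, 5)   # flow down out of state 5
```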
On the chromatic number of a random hypergraph
13:10 Fri 22 Mar, 2013 :: Ingkarni Wardli B21 :: Dr Catherine Greenhill :: University of New South Wales

A hypergraph is a set of vertices and a set of hyperedges, where each hyperedge is a subset of vertices. A hypergraph is r-uniform if every hyperedge contains r vertices. A colouring of a hypergraph is an assignment of colours to vertices such that no hyperedge is monochromatic. When the colours are drawn from the set {1,...,k}, this defines a k-colouring. We consider the problem of k-colouring a random r-uniform hypergraph with n vertices and cn edges, where k, r and c are constants and n tends to infinity. In this setting, Achlioptas and Naor showed that for the case of r = 2, the chromatic number of a random graph must have one of two easily computable values as n tends to infinity. I will describe some joint work with Martin Dyer (Leeds) and Alan Frieze (Carnegie Mellon), in which we generalised this result to random uniform hypergraphs. The argument uses the second moment method, and applies a general theorem for performing Laplace summation over a lattice. So the proof contains something for everyone, with elements from combinatorics, analysis and algebra.
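The objects involved are easy to generate and check (a sketch with invented sizes): a random r-uniform hypergraph, and a test of whether a colouring leaves some hyperedge monochromatic:

```python
import random

# n_edges random hyperedges, each a set of r distinct vertices.
def random_hypergraph(n, r, n_edges, seed=6):
    rng = random.Random(seed)
    return [tuple(rng.sample(range(n), r)) for _ in range(n_edges)]

# A colouring is proper when no hyperedge sees only one colour.
def is_proper(edges, colouring):
    return all(len({colouring[v] for v in edge}) > 1 for edge in edges)

edges = random_hypergraph(n=12, r=3, n_edges=18)
two_colouring = [v % 2 for v in range(12)]
proper = is_proper(edges, two_colouring)
```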
14:10 Mon 20 May, 2013 :: 7.15 Ingkarni Wardli :: A/Prof. Robb Muirhead :: School of Mathematical Sciences

This is a lighthearted (some would say content-free) talk about coincidences, those surprising concurrences of events that are often perceived as meaningfully related, with no apparent causal connection. Time permitting, it will touch on topics like:
Patterns in data and the dangers of looking for patterns, unspecified ahead of time, and trying to "explain" them; e.g. post hoc subgroup analyses, cancer clusters, conspiracy theories ...
Matching problems; e.g. the birthday problem and extensions
People who win a lottery more than once -- how surprised should we really be? What's the question we should be asking?
When you become familiar with a new word, and see it again soon afterwards, how surprised should you be?
Caution: This is a shortened version of a talk that was originally prepared for a group of non-mathematicians and non-statisticians, so it's mostly non-technical. It probably does not contain anything you don't already know -- it will be an amazing coincidence if it does!
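The birthday problem mentioned in the list above has a one-line calculation; a small Python sketch (assuming 365 equally likely birthdays):

```python
def p_shared_birthday(n, days=365):
    """Probability that at least two of n people share a birthday:
    one minus the probability that all n birthdays are distinct."""
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (days - i) / days
    return 1.0 - p_distinct

# The classic surprise: only 23 people give a better-than-even chance.
print(round(p_shared_birthday(23), 4))  # 0.5073
```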
K-theory and solid state physics
12:10 Fri 13 Sep, 2013 :: Ingkarni Wardli B19 :: Dr Keith Hannabuss :: Balliol College, Oxford

More than 50 years ago Dyson showed that there is a nine-fold classification of random matrix models, the classes of which are each associated with Riemannian symmetric spaces. More recently it was realised that a related argument enables one to classify the insulating properties of fermionic systems (with the addition of an extra class to give 10 in all), and can be described using K-theory. In this talk I shall give a survey of the ideas, and a brief outline of work with Guo Chuan Thiang.
Random Wanderings on a Sphere...
11:10 Tue 17 Sep, 2013 :: Ingkarni Wardli Level 5 Room 5.57 :: A/Prof Robb Muirhead :: University of Adelaide

This will be a short talk (about 30 minutes) about the following problem. (Even if I tell you all I know about it, it won't take very long!) Imagine the earth is a unit sphere in 3 dimensions. You're standing at a fixed point, which we may as well take to be the North Pole. Suddenly you get moved to another point on the sphere by a random (uniform) orthogonal transformation. Where are you now? You're not at a point which is uniformly distributed on the surface of the sphere (so, since most of the earth's surface is water, you're probably drowning). But then you get moved again by the same orthogonal transformation. Where are you now? And what happens to your location if this happens repeatedly? I have only a partial answer to this question, for 2 and 3 transformations. (There's nothing special about 3 dimensions here--results hold for all dimensions which are at least 3.) I don't know of any statistical application for this! This work was motivated by a talk I heard, given by Tom Marzetta (Bell Labs) at a conference at MIT. Although I know virtually nothing about signal processing, I gather Marzetta was trying to encode signals using powers of random orthogonal matrices. After carrying out simulations, I think he decided it wasn't a good idea.
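The repeated-rotation experiment is easy to simulate; a sketch, assuming the standard QR construction of a Haar-distributed orthogonal matrix (the talk's own analysis is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_orthogonal(d, rng):
    """Haar-distributed orthogonal matrix via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    return q * np.sign(np.diag(r))

d = 3
north_pole = np.zeros(d)
north_pole[-1] = 1.0

# Draw ONE uniform random rotation and apply it repeatedly: the orbit
# Q^k e stays on the unit sphere, but after the first step the points are
# no longer uniformly distributed over the sphere.
Q = haar_orthogonal(d, rng)
x = north_pole.copy()
for k in range(1, 6):
    x = Q @ x
    print(k, np.round(x, 3), np.linalg.norm(x))
```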
All at sea with spectral analysis
11:10 Tue 19 Nov, 2013 :: Ingkarni Wardli Level 5 Room 5.56 :: A/Prof Andrew Metcalfe :: The University of Adelaide

The steady state response of a single degree of freedom damped linear system to a sinusoidal input is a sinusoidal function at the same frequency, but generally with a different amplitude and a phase shift. The analogous result for a random stationary input can be described in terms of input and response spectra and a transfer function description of the linear system. The practical use of this result is that the parameters of a linear system can be estimated from the input and response spectra, and the response spectrum can be predicted if the transfer function and input spectrum are known. I shall demonstrate these results with data from a small ship in the North Sea. The results from the sea trial raise the issue of non-linearity, and second order amplitude response functions are obtained using auto-regressive estimators. The possibility of using wavelets rather than spectra is considered in the context of single degree of freedom linear systems. Everybody is welcome to attend. Please note the change of venue - we will be in room 5.56.
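The spectra/transfer-function relationship can be illustrated with a discrete-time toy system (a first-order autoregressive filter, chosen purely for simplicity; not the ship data). For a stationary random input, the response spectrum equals the squared gain |H(f)|^2 times the input spectrum, so |H|^2 can be estimated from the ratio of averaged periodograms:

```python
import numpy as np

rng = np.random.default_rng(5)
a, nseg, m = 0.8, 100, 256   # AR coefficient, number of segments, segment length

# Toy linear system y[n] = a*y[n-1] + x[n], with transfer function
# H(f) = 1 / (1 - a e^{-2 pi i f}).  Average periodograms of input and
# response over many independent segments, then take their ratio.
Sxx = np.zeros(m)
Syy = np.zeros(m)
for _ in range(nseg):
    x = rng.standard_normal(2 * m)
    y = np.zeros_like(x)
    for n in range(1, 2 * m):
        y[n] = a * y[n - 1] + x[n]
    Sxx += np.abs(np.fft.fft(x[m:])) ** 2   # second half only: drop transient
    Syy += np.abs(np.fft.fft(y[m:])) ** 2

f = np.fft.fftfreq(m)
H2_est = Syy / Sxx
H2_true = 1.0 / np.abs(1.0 - a * np.exp(-2j * np.pi * f)) ** 2
print(H2_est[0], H2_true[0])   # both near 1/(1-a)^2 = 25
```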
A few flavours of optimal control of Markov chains
11:00 Thu 12 Dec, 2013 :: B18 :: Dr Sam Cohen :: Oxford University

In this talk we will outline a general view of optimal control of a continuous-time Markov chain, and how this naturally leads to the theory of Backward Stochastic Differential Equations. We will see how this class of equations gives a natural setting to study these problems, and how we can calculate numerical solutions in many settings. These will include problems with payoffs with memory, with random terminal times, with ergodic and infinite-horizon value functions, and with finite and infinitely many states. Examples will be drawn from finance, networks and electronic engineering.
Ergodicity and loss of capacity: a stochastic horseshoe?
15:10 Fri 9 May, 2014 :: B.21 Ingkarni Wardli :: Professor Ami Radunskaya :: Pomona College, the United States of America

Random fluctuations of an environment are common in ecological and economical settings. The resulting processes can be described by a stochastic dynamical system, where a family of maps parametrized by an independent, identically distributed random variable forms the basis for a Markov chain on a continuous state space. Random dynamical systems are a beautiful combination of deterministic and random processes, and they have received considerable interest since von Neumann and Ulam's seminal work in the 1940s. Key questions in the study of a stochastic dynamical system are: does the system have a well-defined average, i.e. is it ergodic? How does this long-term behavior compare to that of the state variable in a constant environment with the averaged parameter? In this talk we answer these questions for a family of maps on the unit interval that model self-limiting growth. The techniques used can be extended to study other families of concave maps, and so we conjecture the existence of a "stochastic horseshoe".
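A toy version of such a stochastic dynamical system can be simulated directly. The sketch below uses logistic maps on the unit interval with a hypothetical two-point parameter distribution (the talk's actual family of maps is not specified here), and compares the random system's long-run average with the fixed point of the map using the averaged parameter:

```python
import numpy as np

rng = np.random.default_rng(2)

# At each step apply f_r(x) = r x (1 - x) with r drawn i.i.d. from {2.0, 3.0}
# (hypothetical values).  The averaged parameter is r_bar = 2.5, whose map has
# fixed point 1 - 1/r_bar = 0.6; the random system's ergodic average need not
# coincide with it.
steps = 200_000
x = 0.5
total = 0.0
for _ in range(steps):
    r = rng.choice([2.0, 3.0])
    x = r * x * (1 - x)
    total += x

random_avg = total / steps
averaged_param_fixed_point = 1 - 1 / 2.5
print(random_avg, averaged_param_fixed_point)
```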
A Random Walk Through Discrete State Markov Chain Theory
12:10 Mon 22 Sep, 2014 :: B.19 Ingkarni Wardli :: James Walker :: University of Adelaide

This talk will go through the basics of Markov chain theory, including how to construct a continuous-time Markov chain (CTMC), how to adapt a Markov chain to include non-memoryless distributions, how to simulate CTMCs, and some key results.
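A minimal sketch of CTMC simulation from a generator matrix, using the standard holding-time/jump-chain construction (the example two-state chain and its rates are arbitrary):

```python
import random

def simulate_ctmc(Q, state, t_end, rng):
    """Simulate a CTMC with generator matrix Q from `state` until time t_end.
    The holding time in state i is exponential with rate -Q[i][i]; the chain
    then jumps to j != i with probability Q[i][j] / (-Q[i][i])."""
    t, path = 0.0, [(0.0, state)]
    while True:
        rate = -Q[state][state]
        if rate == 0.0:            # absorbing state
            break
        t += rng.expovariate(rate)
        if t >= t_end:
            break
        weights = [Q[state][j] if j != state else 0.0 for j in range(len(Q))]
        state = rng.choices(range(len(Q)), weights=weights)[0]
        path.append((t, state))
    return path

# A two-state on/off chain: rate 1 from state 0 to 1, rate 2 from 1 to 0.
Q = [[-1.0, 1.0],
     [2.0, -2.0]]
rng = random.Random(4)
path = simulate_ctmc(Q, 0, 10.0, rng)
print(path[:5])
```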
Spectral asymptotics on random Sierpinski gaskets
12:10 Fri 26 Sep, 2014 :: Ingkarni Wardli B20 :: Uta Freiberg :: Universitaet Stuttgart

Self similar fractals are often used in modeling porous media. Hence, defining a Laplacian and a Brownian motion on such sets describes transport through such materials. However, the assumption of strict self similarity could be too restricting. So, we present several models of random fractals which could be used instead. After recalling the classical approaches of random homogeneous and recursive random fractals, we show how to interpolate between these two model classes with the help of so-called V-variable fractals. This concept (developed by Barnsley, Hutchinson & Stenflo) allows the definition of new families of random fractals, whereby the parameter V describes the degree of `variability' of the realizations. We discuss how the degree of variability influences the geometric, analytic and stochastic properties of these sets. - These results have been obtained with Ben Hambly (University of Oxford) and John Hutchinson (ANU Canberra).
Optimally Chosen Quadratic Forms for Partitioning Multivariate Data
13:10 Tue 14 Oct, 2014 :: Ingkarni Wardli 715 Conference Room :: Assoc. Prof. Inge Koch :: School of Mathematical Sciences

Quadratic forms are commonly used in linear algebra. For d-dimensional vectors they have a matrix representation, Q(x) = x'Ax, for some symmetric matrix A. In statistics quadratic forms are defined for d-dimensional random vectors, and one of the best-known quadratic forms is the Mahalanobis distance of two random vectors. In this talk we want to partition a quadratic form Q(X) = X'MX, where X is a random vector, and M a symmetric matrix, that is, we want to find a d-dimensional random vector W such that Q(X) = W'W. This problem has many solutions. We are interested in a solution or partition W of X such that pairs of corresponding variables (X_j, W_j) are highly correlated and such that W is simpler than the given X. We will consider some natural candidates for W which turn out to be suboptimal in the sense of the above constraints, and we will then exhibit the optimal solution. Solutions of this type are useful in the well-known T-square statistic. We will see in examples what these solutions look like.
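One of the "natural candidates" for W is easy to write down: W = M^{1/2} X, using the symmetric square root of M, which satisfies Q(X) = W'W exactly (though, per the abstract, such candidates can be suboptimal under the correlation criterion). A numpy sketch with an arbitrary positive definite M:

```python
import numpy as np

rng = np.random.default_rng(6)
d = 4

# Build an arbitrary symmetric positive definite M.
A = rng.standard_normal((d, d))
M = A @ A.T + d * np.eye(d)

# Symmetric square root via the eigendecomposition M = V diag(vals) V'.
vals, vecs = np.linalg.eigh(M)
M_half = vecs @ np.diag(np.sqrt(vals)) @ vecs.T

# One realisation of the random vector X; W = M^{1/2} X partitions the form:
# W'W = X' M^{1/2} M^{1/2} X = X' M X.
X = rng.standard_normal(d)
W = M_half @ X
print(X @ M @ X, W @ W)   # equal
```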
Complex Systems, Chaotic Dynamics and Infectious Diseases
15:10 Fri 5 Jun, 2015 :: Engineering North N132 :: Prof Michael Small :: UWA

In complex systems, the interconnection between the components of the system determine the dynamics. The system is described by a very large and random mathematical graph and it is the topological structure of that graph which is important for understanding of the dynamical behaviour of the system. I will talk about two specific examples - (1) spread of infectious disease (where the connection between the agents in a population, rather than epidemic parameters, determine the endemic state); and, (2) a transformation to represent a dynamical system as a graph (such that the "statistical mechanics" of the graph characterise the dynamics).
How predictable are you? Information and happiness in social media.
12:10 Mon 21 Mar, 2016 :: Ingkarni Wardli Conference Room 715 :: Dr Lewis Mitchell :: School of Mathematical Sciences

The explosion of ``Big Data'' coming from online social networks and the like has opened up the new field of ``computational social science'', which applies a quantitative lens to problems traditionally in the domain of psychologists, anthropologists and social scientists. What does it mean to be influential? How do ideas propagate amongst populations? Is happiness contagious? For the first time, mathematicians, statisticians, and computer scientists can provide insight into these and other questions. Using data from social networks such as Facebook and Twitter, I will give an overview of recent research trends in computational social science, describe some of my own work using techniques like sentiment analysis and information theory in this realm, and explain how you can get involved with this highly rewarding research field as well.
Predicting turbulence
14:10 Tue 30 Aug, 2016 :: Napier 209 :: Dr Trent Mattner :: School of Mathematical Sciences

Turbulence is characterised by three-dimensional unsteady fluid motion over a wide range of spatial and temporal scales. It is important in many problems of technological and scientific interest, such as drag reduction, energy production and climate prediction. Turbulent flows are governed by the Navier--Stokes equations, which are a nonlinear system of partial differential equations. Typically, numerical methods are needed to find solutions to these equations. In turbulent flows, however, the resulting computational problem is usually intractable. Filtering or averaging the Navier--Stokes equations mitigates the computational problem, but introduces new quantities into the equations. Mathematical models of turbulence are needed to estimate these quantities. One promising turbulence model consists of a random collection of fluid vortices, which are themselves approximate solutions of the Navier--Stokes equations.
SIR epidemics with stages of infection
12:10 Wed 28 Sep, 2016 :: EM218 :: Matthieu Simon :: Universite Libre de Bruxelles

This talk is concerned with a stochastic model for the spread of an epidemic in a closed homogeneously mixing population. The population is subdivided into three classes of individuals: the susceptibles, the infectives and the removed cases. In short, an infective remains infectious during a random period of time. While infected, it can contact all the susceptibles present, independently of the other infectives. At the end of the infectious period, it becomes a removed case and has no further part in the infection process.

We represent an infectious period as a set of different stages that an infective can go through before being removed. The transitions between stages are ruled by either a Markov process or a semi-Markov process. In each stage, an infective makes contaminations at the epochs of a Poisson process with a specific rate.

Our purpose is to derive closed expressions for a transform of different statistics related to the end of the epidemic, such as the final number of susceptibles and the area under the trajectories of all the infectives. The analysis is performed by using simple matrix analytic methods and martingale arguments. Numerical illustrations will be provided at the end of the talk.
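A sketch of the simplest special case of the model above: a Markovian SIR epidemic with a single exponential infectious stage (the talk's general model allows multiple Markov or semi-Markov stages, and derives transforms rather than simulating). Parameter values are arbitrary; the embedded jump chain suffices to find the final size:

```python
import random

def sir_final_size(n_susceptible, n_infective, beta, gamma, rng):
    """Markovian SIR: each infective contacts each susceptible at rate beta
    and is removed at rate gamma.  At each event, the next transition is an
    infection with probability beta*s*i / (beta*s*i + gamma*i).  Returns the
    number of susceptibles remaining when the last infective is removed."""
    s, i = n_susceptible, n_infective
    while i > 0:
        infection_rate = beta * s * i
        removal_rate = gamma * i
        if rng.random() < infection_rate / (infection_rate + removal_rate):
            s, i = s - 1, i + 1
        else:
            i -= 1
    return s

rng = random.Random(11)
finals = [sir_final_size(100, 1, 0.02, 1.0, rng) for _ in range(500)]
print(sum(finals) / len(finals))   # mean final number of susceptibles
```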
Tales of Multiple Regression: Informative Missingness, Recommender Systems, and R2-D2
15:10 Fri 17 Aug, 2018 :: Napier 208 :: Prof Howard Bondell :: University of Melbourne

In this talk, we briefly discuss two projects tangentially related under the umbrella of high-dimensional regression. The first part of the talk investigates informative missingness in the framework of recommender systems. In this setting, we envision a potential rating for every object-user pair. The goal of a recommender system is to predict the unobserved ratings in order to recommend an object that the user is likely to rate highly. A typically overlooked piece is that the combinations are not missing at random. For example, in movie ratings, a relationship between the user ratings and their viewing history is expected, as human nature dictates the user would seek out movies that they anticipate enjoying. We model this informative missingness, and place the recommender system in a shared-variable regression framework which can aid in prediction quality. The second part of the talk deals with a new class of prior distributions for shrinkage regularization in sparse linear regression, particularly the high dimensional case. Instead of placing a prior on the coefficients themselves, we place a prior on the regression R-squared. This is then distributed to the coefficients by decomposing it via a Dirichlet Distribution. We call the new prior R2-D2 in light of its R-Squared Dirichlet Decomposition. Compared to existing shrinkage priors, we show that the R2-D2 prior can simultaneously achieve both high prior concentration at zero, as well as heavier tails. These two properties combine to provide a higher degree of shrinkage on the irrelevant coefficients, along with less bias in estimation of the larger signals.
Random walks
15:10 Fri 12 Oct, 2018 :: Napier 208 :: A/Prof Kais Hamza :: Monash University

A random walk is arguably the most basic stochastic process one can define. It is also among the most intuitive objects in the theory of probability and stochastic processes. For these and other reasons, it is one of the most studied processes or rather family of processes, finding applications in all areas of science, technology and engineering. In this talk, I will start by recalling some of the classical results for random walks and then discuss some of my own recent explorations in this area of research that has maintained relevance for decades.
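A minimal simulation of the simple symmetric random walk, checking the classical facts that the mean displacement is zero while the mean squared displacement grows linearly with the number of steps (a sketch; the talk's own results are not reproduced here):

```python
import random

def random_walk(steps, rng):
    """Simple symmetric random walk on the integers, started at 0."""
    pos, path = 0, [0]
    for _ in range(steps):
        pos += rng.choice((-1, 1))
        path.append(pos)
    return path

rng = random.Random(42)
endpoints = [random_walk(1000, rng)[-1] for _ in range(2000)]
mean = sum(endpoints) / len(endpoints)
mean_sq = sum(w * w for w in endpoints) / len(endpoints)
print(mean, mean_sq)   # near 0, and near 1000 (variance = number of steps)
```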

News matching "Can statisticians do better than random guessing"

Recent PhD's
At the December graduation ceremony Mr Raymond Kennington was awarded the degree of Doctor of Philosophy for his thesis entitled "Random allocations: new and extended models and techniques with applications and numerics". Congratulations to Ray and his supervisor, Professor Charles Pearce. Posted Wed 2 Jan 08.

Publications matching "Can statisticians do better than random guessing"

A high resolution large-scale gaussian random field rainfall model for Australian monthly rainfall
Osti, Alexander; Leonard, Michael; Lambert, Martin; Metcalfe, Andrew, Water Down Under 2008, Adelaide 14/04/08
A temporally heterogeneous high-resolution large-scale gaussian random field model for Australian rainfall
Osti, Alexander; Leonard, Michael; Lambert, Martin; Metcalfe, Andrew, 17th IASTED International Conference on Applied Simulation and Modelling, Greece 23/06/08
Perturbing singular systems and the correlating of uncorrelated random sequences
Pearce, Charles; Allison, Andrew; Abbott, Derek, International Conference on Numerical Analysis and Applied Mathematics, Corfu, Greece 16/09/07
Optimal multilinear estimation of a random vector under constraints of causality and limited memory
Howlett, P; Torokhti, Anatoli; Pearce, Charles, Computational Statistics & Data Analysis 52 (869–878) 2007
Optimal estimation of a random signal from partially missed data
Torokhti, Anatoli; Howlett, P; Pearce, Charles, EUSIPCO 2006, Florence, Italy 04/09/06
An optimal linear filter for random signals with realisations in a separable Hilbert space
Howlett, P; Pearce, Charles; Torokhti, Anatoli, The ANZIAM Journal 44 (485–500) 2003
Method of recurrent best estimators of second degree for optimal filtering of random signals
Torokhti, Anatoli; Howlett, P, Signal Processing 83 (1013–1024) 2003
On some inequalities for the moments of guessing mapping
Dragomir, S; Pecaric, Josip; Van Der Hoek, John, Mathematical Journal of Ibaraki University 34 (1–16) 2002
Positive random variables and the A-G-H inequality
Pearce, Charles, Australian Mathematical Society Gazette 27 (91–95) 2000

Advanced search options

You may be able to improve your search results by using the following syntax:

Query                       Matches the following
Asymptotic Equation         Anything with "Asymptotic" or "Equation".
+Asymptotic +Equation       Anything with "Asymptotic" and "Equation".
+Stokes -"Navier-Stokes"    Anything containing "Stokes" but not "Navier-Stokes".
Dynam*                      Anything containing "Dynamic", "Dynamical", "Dynamicist", etc.