
Department Colloquium

2024-2025 Colloquium

Unless indicated otherwise (*), talks are held 12 to 12:50 p.m. on Tuesdays in 372 MSC, with refreshments and conversation from 11:30 a.m. to noon in 368 MSC.

October 15*, Lucian Mazza (Oakland University) Higher Order Matching Preclusion for Regular Interconnection Networks

For a graph with an even number of vertices, the matching preclusion number is the minimum size of a set of edges whose deletion leaves a graph with no perfect matching. This number, along with variations that impose special requirements (e.g., forbidding the deleted set from consisting of all edges incident to a single vertex), has been studied extensively. We'll briefly discuss some of this history, leading up to level-2 matching preclusion, and then give sufficient conditions for a regular graph to have two related desirable characteristics. These sufficient conditions apply to the class of "pancake graphs," which we conclude possess these characteristics that are desirable for an interconnection network.

*The talk is on Tuesday, October 15, 11:30 a.m.-12:20 p.m. in 372 MSC
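
For reference, the matching preclusion number sketched above is commonly formalized as (notation ours, not necessarily the speaker's):

\[
\mathrm{mp}(G) \;=\; \min \bigl\{\, |F| \;:\; F \subseteq E(G),\ G - F \text{ has no perfect matching} \,\bigr\}.
\]

Deleting all edges incident to a single vertex isolates that vertex and destroys every perfect matching, so mp(G) never exceeds the minimum degree of G; the conditional variants mentioned in the abstract rule out exactly this trivial strategy.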

October 22, Jun Hu (Oakland University), Estimation of the Mean Exponential Survival Time under a Sequential Censoring Scheme

The exponential distribution is widely used in reliability analysis and life testing due to its simplicity and interpretability in modeling survival times. However, practical constraints such as experimental designs, time limits, and budgetary requirements often lead to censored data. In response to these challenges, we propose a novel sequential censoring scheme that integrates features of both Type I and Type II censoring. Within this framework, we introduce two estimation methods, a purely sequential procedure and a two-stage procedure, for estimating the mean survival time of an exponential distribution. Each procedure is designed to balance estimation accuracy with cost-efficiency by minimizing the number of required observations. Extensive Monte Carlo simulations demonstrate the remarkable performance of both methods in terms of sampling efficiency and risk optimization. To illustrate their real-world applicability, we apply these methods to assess the reliability of various Backblaze hard disk models.
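
As background on the flavor of such problems, here is a minimal sketch of the classical fixed-sample estimator under Type I censoring at a deadline c (not the sequential or two-stage procedures proposed in the talk), where the exponential mean is estimated by the total time on test divided by the number of observed failures:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, c, n = 5.0, 4.0, 10_000      # true mean, censoring time, sample size

x = rng.exponential(theta, n)       # latent survival times
t_obs = np.minimum(x, c)            # recorded times (Type I right censoring)
d = np.sum(x <= c)                  # number of uncensored failures

# Classical MLE of the exponential mean under Type I censoring:
# total time on test / number of observed failures.
theta_hat = t_obs.sum() / d
print(f"estimate {theta_hat:.3f} vs true mean {theta}")
```

The schemes in the talk instead decide adaptively when to stop sampling, rather than fixing n and c in advance.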

October 29, Henry So (Oakland University), Enhancing Imputation Methods with Machine Learning: Applications in One-Shot Devices and Spirometry Data

In this talk, we propose a cluster-based multiple imputation method and evaluate its performance in addressing missing data within two distinct datasets: one-shot devices and spirometry measurements. We use clustering techniques such as K-prototypes, DBSCAN, or hierarchical clustering to categorize individuals into latent groups whose members share comparable features. We then impute the missing values within each cluster using multiple imputation with fully conditional specification.

Our technique is useful for evaluating one-shot devices such as airbags and fire extinguishers. Public databases for such single-use items often contain missing observations, making reliability evaluations difficult. Our imputation for the Crash Report Sampling System (CRSS) datasets from the National Highway Traffic Safety Administration (NHTSA) enables reliability analysis using public data that reflect actual user conditions.

Next, we apply our technique to a spirometry dataset from the Canadian Longitudinal Study on Aging (CLSA). Population-based studies of COPD diagnosis depend on spirometry, a measure of lung capacity. Contraindications and latent features make spirometry variables "co-missing," and simple multiple imputation approaches yield less trustworthy findings than our method.

Our results demonstrate the efficacy of cluster-based multiple imputation: the approach handles the missing data in both the one-shot device and spirometry settings, improves data quality, and yields more reliable analytical results, demonstrating its practical utility.
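
A minimal sketch of the cluster-then-impute idea described above, with K-means and scikit-learn's IterativeImputer standing in for the K-prototypes/DBSCAN clustering and the fully-conditional-specification machinery (one imputation per cluster is shown; true multiple imputation would repeat this with different seeds and pool the results):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, SimpleImputer

def cluster_then_impute(X, n_clusters=3, seed=0):
    """Cluster rows into latent groups, then run chained-equations
    imputation separately inside each cluster."""
    # Rough pre-fill so K-means can run on data containing NaNs.
    X_filled = SimpleImputer(strategy="mean").fit_transform(X)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(X_filled)

    X_out = np.array(X, dtype=float, copy=True)
    for k in range(n_clusters):
        rows = labels == k
        # Assumes every feature is observed at least once per cluster.
        X_out[rows] = IterativeImputer(random_state=seed).fit_transform(X_out[rows])
    return X_out
```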

October 8, Matthew Toeniskoetter (Oakland University), Interval Rings

Interval rings comprise an interesting and approachable class of one-dimensional integrally closed local overrings of a two-dimensional regular local ring. In this talk, we will give the explicit construction of interval rings, we will discuss the nice properties that these rings possess, and we will discuss possible generalizations to higher-dimensional spaces.

October 1, JD Nir (Oakland University), I think games on graphs are neat

Games on graphs, including pursuit-evasion games and influence spreading games, make up a discipline of graph theory that straddles the theoretical/applied line. In this talk we'll take a look at some famous games on graphs, discuss the practical advantages of studying them, and then highlight a few interesting techniques and results from my work in the field.

September 17, Daozhi Han (University at Buffalo), A quasi-incompressible Cahn-Hilliard-Darcy model for two-phase flows in porous media

Two-phase flow in porous media is known as the Muskat problem, which can be ill-posed. In this talk we introduce a quasi-incompressible Cahn-Hilliard-Darcy model as a relaxation of the Muskat problem. We show global existence of weak solutions to the model. We then present a high-order accurate, bound-preserving, and unconditionally stable numerical method for solving the equations.
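
For orientation (standard background, not the talk's specific model), the Darcy component closes such porous-media models by relating the volume-averaged velocity to the pressure gradient,

\[
\mathbf{u} \;=\; -\frac{\kappa}{\mu}\,\nabla p,
\]

where \(\kappa\) is the permeability and \(\mu\) the fluid viscosity, while the Cahn-Hilliard component replaces the sharp fluid-fluid interface with a diffuse phase field; in the quasi-incompressible setting this allows the two phases to have different densities.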

September 12, Yongjin Lu (Oakland University), Luenberger Compensator Theory for Fluid-Viscoelastic-Structure Interaction with Interface Feedback Control

In this work, we explore the continuous theory of the Luenberger dynamic compensator problem for a prototype fluid-structure interaction model, which involves the interplay between a fluid and a solid structure submerged in the fluid. The structure in the model is assumed to be made of a viscoelastic material and is therefore subject to Kelvin-Voigt damping. Feedback control is implemented at the interface between the fluid and the structure. To address the challenge of eliminating the fluid pressure, we employ a nonstandard analytical technique that involves constructing an elliptic operator. We formulate the problem within an abstract functional analysis framework. Delicate partial differential equation (PDE) energy estimates determine the definition of the interface feedback control. We show that the Luenberger compensator developed in this work can asymptotically approximate the state of the original system by tracking only partial information about the full state at the interface, which may otherwise be inaccessible and difficult to measure. This approach provides a practical solution for various applications.
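
As a reminder of the idea (the classical finite-dimensional prototype, not the PDE system treated in the talk), a Luenberger observer reconstructs an unmeasured state x from an output y = Cx via

\[
\dot{\hat{x}} \;=\; A\hat{x} + Bu + L\,\bigl(y - C\hat{x}\bigr),
\]

so the error \(e = x - \hat{x}\) obeys \(\dot{e} = (A - LC)\,e\) and decays whenever the gain L renders A - LC stable. The talk's contribution is carrying this construction over to a fluid-viscoelastic-structure system, with the output injection acting through the fluid-structure interface.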


2023-2024 Colloquium

April 9, Jorge de Mello (Oakland University) Integral points in orbits and multiplicative dependence

In arithmetic and in its emerging branch called arithmetic dynamics, height functions have been ubiquitous objects of study and have important properties, characterizing sets of integral points in orbits and often detecting preperiodic points. In this talk, we aim to review some results about the sparsity of integral points lying in orbits of morphisms in varieties over number fields in light of the theory of heights and present some of their consequences on degeneracy in multiplicative dependence results and arithmetic dynamics.

April 2, M B Rao (University of Cincinnati) Ronald Aylmer Fisher – Father of Modern Statistics – The Good, the Bad, and the Ugly About Him – Turmoil Surrounding Him

A sub-theme: Proposing the triumvirate (Pearson, Fisher, Rao) as the architects of Modern Statistics – Making a Case

There is absolutely no doubt that Ronald Fisher is the architect of modern mathematical statistics. However, there are some dark forces knocking him off the pedestal based on events that have been unjustifiably held against him. In this presentation, we trace these developments and try to make a case for the restoration of his eminence.

March 12, Nitis Mukhopadhyay (University of Connecticut-Storrs) Some Personal Reflections on Sufficiency-Completeness-UMVUE and Related Issues

This presentation will focus on core concepts from statistical inference, including sufficiency, completeness, and UMVUE. One will observe that Rao-Blackwellization and Basu's theorem play a big hand as I sift through some of the fundamentals. I often bump into a surprising result or clarification that presents itself unexpectedly as I turn one corner or another, and I feel energized. Such a pathway, filled with fun, has frequently pushed me to cross over the customarily finite boundaries of teaching into the infinite freedom of research. I will share my enthusiasm with examples and counter-examples.
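
For readers who want the statement behind "Rao-Blackwellization" (standard background, not specific to the talk): if T is a sufficient statistic for \(\theta\) and \(\delta(X)\) is any estimator of \(g(\theta)\), then conditioning on T,

\[
\delta^{*}(T) \;=\; \mathbb{E}\bigl[\delta(X) \mid T\bigr],
\qquad
\mathbb{E}\bigl[(\delta^{*} - g(\theta))^{2}\bigr] \;\le\; \mathbb{E}\bigl[(\delta - g(\theta))^{2}\bigr],
\]

yields an estimator that is never worse in mean squared error. If T is also complete, the improved unbiased estimator is the essentially unique UMVUE, which is where completeness enters the story.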

November 7, Krzysztof Fidkowski (University of Michigan) Output-Adaptive Hybridized Discontinuous Finite Elements for Efficient Flow Computations

Discontinuous Galerkin (DG) methods have enabled accurate computations of complex flow fields, yet their memory footprint and computational costs are large. Hybridized discontinuous Galerkin (HDG/EDG) methods reduce the number of globally-coupled degrees of freedom by decoupling elements and stitching the unknowns together through equations of weak flux continuity. An adjoint-weighted residual provides the error estimate in a chosen output and, localized to elements, this error indicator is used to optimize the computational mesh. The mesh optimization includes order refinement (p) and remeshing (h). Both steady and unsteady problems are considered, the latter requiring attribution of the error to the spatial and temporal discretizations. The applications of interest include external aerodynamics with compressible flows at high Reynolds numbers.
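
In generic form (notation ours), the adjoint-weighted residual output error estimate reads

\[
J(u_h) - J\bigl(u_h^{H}\bigr) \;\approx\; -\,\psi_h^{T}\, R_h\bigl(u_h^{H}\bigr),
\]

where \(u_h^{H}\) is the coarse solution injected into a finer discretization with residual \(R_h\), and \(\psi_h\) is the discrete adjoint for the output J on that finer space; summing the magnitudes of the elementwise contributions gives the local indicator that drives the h and p decisions.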

November 2*, Huong Tran (Oakland University) Graphical Assessment of Univariate and Multivariate Normality and Other Probability Distributions

The assumption of normality plays an important role in numerous statistical analyses, and making an accurate assessment of this assumption is vital. Due to the complexity of high-dimensional data, most of the existing hypothesis-testing-based techniques have low power and fail to give much insight, as the decision is reduced to a single number and its corresponding p-value. Graphical methods, although more informal, shed more light on the extent of departure from normality. With visual representations, one can gain deeper insights into the characteristics of the distribution of the data.

In this work, we introduce certain graphical tools to assess univariate and multivariate normality based on the derivatives of the cumulant generating functions. These enable us to assess departures due to the skewness of the data and the thickness of the tails of the distribution. In the case of univariate data, the method can also be extended to test other distributional assumptions for the data.
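
The talk's specific graphical tools are its own, but the underlying quantity is easy to experiment with: the empirical cumulant generating function \(K(t) = \log \mathbb{E}[e^{tX}]\) is exactly quadratic for normal data, so its third derivative should hover near zero, and skewness bends it away. A toy sketch:

```python
import numpy as np

def empirical_cgf(sample, t):
    """K(t) = log E[exp(t X)], estimated by a sample average."""
    return np.log(np.exp(np.outer(t, sample)).mean(axis=1))

rng = np.random.default_rng(1)
x = rng.normal(size=5000)     # swap in rng.exponential(size=5000) to see skew
t = np.linspace(-0.5, 0.5, 201)
K = empirical_cgf(x, t)

# For exactly normal data, K(t) = mu*t + sigma^2*t^2/2, so the third
# numerical derivative stays near zero; skewness pushes it away.
K3 = np.gradient(np.gradient(np.gradient(K, t), t), t)
print(K3[len(t) // 2])
```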

October 31, Tamás Horváth (Oakland University) Embedded or Hybridized Discontinuous Galerkin Methods? Maybe both?

Discontinuous Galerkin methods have been developed for many applications but are usually criticized for the larger number of unknowns compared to continuous Galerkin discretizations. The Hybridizable DG method overcomes this issue by introducing additional facet unknowns and reducing the problem to the facet unknowns using static condensation. The Embedded DG methods further reduce the number of unknowns by using continuous basis functions for the facet unknowns. However, in certain applications, such as incompressible fluids, one can consider using different trace functions for the different variables, which leads to the Embedded-Hybridizable DG methods. This talk will present some applications of these methods and discuss their advantages and disadvantages.
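
The static condensation step mentioned above is, in linear-algebra terms, a Schur complement (generic sketch, notation ours). Writing the element-interior unknowns u and the facet unknowns \(\lambda\) as a block system

\[
\begin{pmatrix} A & B \\ C & D \end{pmatrix}
\begin{pmatrix} u \\ \lambda \end{pmatrix}
=
\begin{pmatrix} f \\ g \end{pmatrix},
\]

one eliminates u element by element (A is block-diagonal, hence cheap to invert) and solves the much smaller global system

\[
\bigl(D - C A^{-1} B\bigr)\,\lambda \;=\; g - C A^{-1} f,
\]

recovering u locally afterwards. EDG shrinks the global system further by taking the facet space continuous.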

October 24, Giselle Sosa Jones (Oakland University) Numerical simulation of multiphase flow in porous media

Modeling the flow of liquid, aqueous, and vapor phases through porous media is a complex and challenging task that requires solving nonlinear coupled partial differential equations. In this talk, we propose a second-order accurate and energy-stable time discretization method for the two-phase flow problem in porous media. We prove the convergence of the linearization scheme and demonstrate the energy-stability property. Our spatial discretization uses an interior penalty discontinuous Galerkin method, and we establish the well-posedness of the discrete problem and provide error estimates under certain conditions on the data. We validate our method through numerical simulations, which show that our approach achieves the theoretical convergence rates. Furthermore, the numerical examples highlight the advantages of our time discretization over other second-order approximations.

October 17, Susan Strong, Theo Cauc, Greg Antoine (Altair) Meet Altair and learn about your path to success!

Altair is a global leader in computational science and AI, providing software and cloud solutions in simulation, high-performance computing (HPC), and data analytics.

October 10, Chen Liu (Purdue University) Recent Progress on Positivity-preserving High-order Accurate Implicit-explicit Algorithm for Compressible Flow Simulations

In many demanding gas dynamics applications, such as hypersonic flow simulation, the compressible Navier–Stokes equations form one of the most popular and important models. In this talk, we propose a fully discrete implicit-explicit scheme for solving the compressible Navier–Stokes equations within the operator splitting framework. In our approach, the compressible model is split into a hyperbolic subproblem and a parabolic subproblem, which represent two asymptotic regimes: the vanishing viscosity limit and the dominance of the diffusive terms. We utilize the Runge–Kutta discontinuous Galerkin method and the interior penalty discontinuous Galerkin method to discretize the hyperbolic subproblem and the parabolic subproblem, respectively.

Our proposed scheme is conservative and arbitrarily high-order accurate in space. The positivity-preserving property for up to Q3 space discretizations is rigorously provable using the monotonicity of the system matrix. For even higher-order Qk (k ≥ 4) space discretizations, the numerical schemes can be rendered bound-preserving, without losing conservation and accuracy, by a post-processing procedure that solves a constrained minimization in each time step. Such a constrained optimization can be formulated as a nonsmooth convex minimization problem, which can be solved efficiently by the Douglas–Rachford method with optimal algorithm parameters. By analyzing the asymptotic linear convergence rate of the generalized Douglas–Rachford splitting method, the optimal algorithm parameters can be approximately expressed as a simple function of the numerical solutions. For each time step, the computational cost for the Douglas–Rachford splitting to enforce bounds and conservation up to round-off error is of order O(N), where N denotes the total number of mesh cells. Our scheme obeys the standard hyperbolic CFL constraint on the time step size. Numerical experiments suggest that our scheme produces satisfactory non-oscillatory solutions when the physical diffusion is accurately resolved, making it well-suited for simulating realistic physical and engineering problems.
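
For context, the generic Douglas–Rachford iteration referred to above, for minimizing f(x) + g(x) with proximal step size \(\gamma > 0\) and relaxation \(\lambda \in (0, 2)\) (the talk's contribution includes choosing these parameters optimally), is

\[
y^{k} = \operatorname{prox}_{\gamma f}\bigl(x^{k}\bigr), \qquad
z^{k} = \operatorname{prox}_{\gamma g}\bigl(2y^{k} - x^{k}\bigr), \qquad
x^{k+1} = x^{k} + \lambda\,\bigl(z^{k} - y^{k}\bigr),
\]

where \(\operatorname{prox}_{\gamma f}(v) = \arg\min_{x} f(x) + \tfrac{1}{2\gamma}\|x - v\|^{2}\). The iterates \(y^{k}\) converge to a minimizer, and when the proximal maps are cheap (e.g., projections onto simple constraint sets), each iteration costs O(N), consistent with the per-time-step cost quoted above.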

October 5, Peter Shi (Oakland University) A rigorous justification of buy-low and sell-high for stocks

The author proves that a novel formulation of the Efficient Market Hypothesis (EMH) leads to a rigorous justification of the old market adage to buy low and sell high. In particular, the benchmarks for "low" and "high" are established through a min-max operation on the Bayes error. This explicit connection between the EMH and statistical arbitrage represents a novel contribution to quantitative trading. The resulting system is extensively back-tested using historical data to support the theoretical findings.

September 26, Huan Lei (Michigan State University) A machine-learning based non-Newtonian hydrodynamic model with molecular fidelity

A long-standing problem in the modeling of the non-Newtonian hydrodynamics of polymeric flows is the availability of reliable and interpretable hydrodynamic models that faithfully encode the underlying micro-scale polymer dynamics. The main complication arises from the long polymer relaxation times, complex molecular structures, and heterogeneous interactions. We developed a deep learning-based non-Newtonian hydrodynamic model, DeePN2, that enables us to systematically pass micro-scale structural mechanics information to the macro-scale hydrodynamics of polymer suspensions. The model retains a multi-scale nature with clear physical interpretation and strictly preserves the frame-indifference constraints. We show that DeePN2 can faithfully capture the broadly overlooked viscoelastic differences arising from specific molecular structural mechanics without human intervention.

September 7, Aycil Cesmelioglu (Oakland University), Numerical study of the Stokes/Darcy model and related problems

The coupling of free flow and porous media flow, which is governed by the Stokes/Darcy model, arises in a wide range of physical processes such as the transport of contaminants via surface/subsurface flow. In this talk, we will look at this problem and a couple of other related problems from a numerical perspective using the hybridizable discontinuous Galerkin (HDG) method. We will discuss the properties and well-posedness of the numerical schemes and their a priori error estimates. Supporting numerical experiments will be presented.

View information from previous Department Colloquia

Department of Mathematics and Statistics

Mathematics and Science Center, Room 368
146 Library Drive
Rochester, MI 48309-4479
(location map)
phone: (248) 370-3430
fax: (248) 370-4184


Hours:
Monday–Friday: 8:00–11:59 a.m. and 1:00–5:00 p.m.