Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Posts

An Alternative to the Log-Likelihood with Entropic Optimal Transport

less than 1 minute read

Published:

This paper explores the entropic optimal transport (EOT) loss and its estimator for parameter estimation, comparing it with traditional likelihood methods and highlighting advantages such as improved robustness, faster convergence, and resilience to bad local optima, with a focus on theoretical justification and experimental validation on Gaussian Mixture Models.

See paper | GitHub Repository

Discrete Morse Theory for Relative Cosheaf Homology

less than 1 minute read

Published:

This paper aims to generalize discrete Morse theory in the context of relative cosheaf homology on filtrations of finite simplicial complexes, enabling faster computations. These methods are extended to persistent cosheaf homology for longer filtrations.

See paper

Robustness in Neural ODEs and SDEs

less than 1 minute read

Published:

Recent studies show that Neural ODEs are more robust against adversarial attacks than traditional DNNs, but as complexity increases, concerns about robustness and expressivity arise, prompting exploration of stochastic noise regularization.

See paper | GitHub Repository

Spectral Methods for Clustering in Finance

less than 1 minute read

Published:

This report gathers tools from spectral graph theory to analyze stock-market relation graphs. It focuses on spectral embedding for positioning companies in Euclidean space, explores graph entropy to classify graphs and detect regime changes, generalizes these methods to directed weighted graphs, and provides in-depth explanations of the underlying concepts and algorithms.

See paper

Forest Classification - Kaggle Challenge

less than 1 minute read

Published:

This Kaggle challenge involved a classification problem (with 7 different classes) based on a dataset of forest parcels. The data consisted of 55 columns, including 11 numerical variables and 2 categorical variables (with 4 and 40 classes, respectively). To tackle this problem, we employed strategies detailed chronologically in this report. After an initial data exploration phase and attempts at dimensionality reduction, we tested several classic algorithms and proceeded with optimizations where possible.

See paper

mini_projects

Denoising Levy Probabilistic Models (DLPM)

Shariatian, D., Simsekli, U., & Durmus, A.O. (2024). Denoising Lévy Probabilistic Models. ArXiv, abs/2407.18609.

Published:

This paper introduces a novel framework to use heavy-tailed noise in the denoising diffusion paradigm, which constitutes a generalization of the original DDPM method. Using heavy-tailed noise is shown to bring benefits in various contexts: heavy-tailed data distributions, better robustness to class imbalance, and smaller computational time.

See paper | See slides

Piecewise deterministic generative models

Bertazzi, A., Durmus, A.O., Shariatian, D., Simsekli, U., & Moulines, É. (2024). Piecewise deterministic generative models. ArXiv, abs/2407.19448.

Published:

We introduce a novel class of generative models based on piecewise deterministic Markov processes (PDMPs), which combine deterministic motion with random jumps. Like diffusions, PDMPs can be reversed in time. We derive explicit expressions for jump rates and kernels in the time-reversed processes and propose efficient training methods and approximate simulation techniques. Additionally, we provide bounds on the total variation distance between the data and model distributions, supported by promising numerical simulations.

See paper

news

MAA106 Teaching

Published:

Finished TA’ing the MAA106 numerical analysis course at Ecole Polytechnique. Thanks to Maxime Breden for the teaching experience, and congrats to the students!

PDMP on the Road

Published:

Presenting PDMP at both NeurIPS in Paris and the official NeurIPS in Vancouver.

Diffusion Reading Group Launch

Published:

Launching the diffusion model reading group at Inria Paris. Feel free to reach out if you want to join!

Oberwolfach Workshop

Published:

Attending the mini-workshop ‘Statistical Challenges for Deep Generative Models’ in legendary Oberwolfach, Germany.

Visit to Padova

Published:

Invited to Padova by Giovanni Conforti to explore cosmological applications of diffusion models. Looking forward to working with such a great person!

Sakana AI Internship

Published:

I am starting an internship at Sakana AI in Tokyo, advised by legendary Stefano Peluchetti (who discovered flow matching a year before… flow matching).

Sakana Research Retreat

Published:

I am co-organising a 5-day research retreat with Sakana’s research staff!

portfolio

publications

Piecewise Deterministic Generative Models

Bertazzi A., Shariatian D., Durmus A.O., Simsekli U., Moulines É.

Published in NeurIPS 2024, 2024

We introduce a novel class of generative models based on piecewise deterministic Markov processes (PDMPs), which combine deterministic motion with random jumps. Like diffusions, PDMPs can be reversed in time. We derive explicit expressions for jump rates and kernels in the time-reversed processes and propose efficient training methods and approximate simulation techniques. Additionally, we provide bounds on the total variation distance between the data and model distributions, supported by promising numerical simulations.

See paper | GitHub Repository

Denoising Levy Probabilistic Models (DLPM)

Shariatian D., Simsekli U., Durmus A.O.

Published in ICLR 2025, 2025

This paper introduces a novel framework to use heavy-tailed noise in the denoising diffusion paradigm, which constitutes a generalization of the original DDPM method. Using heavy-tailed noise is shown to bring benefits in various contexts: heavy-tailed data distributions, better robustness to class imbalance, and smaller computational time.

See paper | See slides | GitHub Repository

Discrete Markov Probabilistic Models: An Improved Discrete Score-Based Framework with sharp convergence bounds under minimal assumptions

Shariatian D.*, Pham L.T.N.*, Ocello A., Conforti G., Durmus A.O.

Published in ICML 2025, 2025

This paper introduces a novel framework for discrete data generation on the hypercube $\{0, 1\}^d$. We establish theoretical and methodological alignment with classical continuous score-based models. We demonstrate the effectiveness of this approach on low- and high-dimensional datasets (Binary MNIST), beating other state-of-the-art methods like Discrete Flow Matching.

See paper | GitHub Repository

talks

teaching

Numerical Analysis MAA106

Undergraduate course, Ecole Polytechnique, CMAP, 2024

TA’d a 4-month course on numerical analysis for 1st-year students.