---
layout: page
title: List of tweets
description: Main tweets
header-img: img/birds.jpg
---
- Differentiable max & argmax operators are defined through concave
- Curve approximation using Fourier series corresponds to drawing the curve
- Navier-Stokes equation describes the motion of an incompressible viscous
- Lloyd's algorithm can be applied on a surface using the geodesic Voronoi diagram.
- Hölder inequality generalizes Cauchy-Schwarz to arbitrary exponents.
- Approximation vs quantization: two ways to compress.
- Hopfield networks are recurrent networks minimizing an Ising-type energy
- Heat, Wave and Schrödinger equations are fundamental linear partial
- Minkowski inequality states that the l^p norm is a norm.
- Haar transform: only + and - yet surprisingly powerful. The first example
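To illustrate the "only + and -" point, here is a minimal one-level Haar transform sketch in Python (function names are mine, not from the tweet):

```python
def haar_step(x):
    # Pair up samples; averages give the coarse signal, differences the details.
    approx = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inverse(approx, detail):
    # Exact reconstruction: a + d and a - d recover the original pair.
    x = []
    for a, d in zip(approx, detail):
        x.extend([a + d, a - d])
    return x
```

Iterating `haar_step` on the coarse part yields the full multiscale transform.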
- Pullback of functions and pushforward of measures are dual to each other.
- The (very rough) English translation of my book "The Discrete Algebra of
- The hairy ball theorem states that in odd dimension d, vector fields on
- Representation theory defines a Fourier transform on groups. Finite
- Shapiro’s inequality: for small integers, cyclic sums of quotients are
- Understanding global convergence of Newton's method is hard ...
- The Wasserstein and Hellinger distances between Gaussians can be computed
- The geodesic distance extends a local metric into a global one. Geodesic
- Elastic net (Zou, Hastie, 2005) interpolates between the Lasso (l1
- Error checking code with an additional parity bit defines a simplex in the
- Pinsker inequality is one of the most fundamental inequalities in
- The Levenberg-Marquardt algorithm is a standard method for non-linear least-squares,
- Non-circular gears generate non-regular motions from regular ones.
- Principal component analysis and non-negative matrix factorization are two
- Minimizing sum of p-powers of the distance generalizes the mean (p=2). For
- Curse and blessing of non-smooth optimization: non-smooth parts (eg
- The average distance to origin of a Brownian motion at time t grows like
- Continued fractions are in some sense optimal ways to approximate with
- Fourier transforms come with lots of flavors. Sampling and periodization
- Reverse mode automatic-differentiation is based on a « backward »
- The mean curvature motion is the most fundamental curve evolution.
- A Riemannian manifold is locally a Euclidean space. An embedded surface
- The integral curves of a vector field tangent to a manifold stay on the
- The primary visual cortex is organised as an oriented wavelet transform.
- Approximation using anisotropic triangulations performs optimal
- The distance field is the unique viscosity solution of the Eikonal
- The subdifferential is the set of slopes below the graph of a convex
- A 1 hidden layer perceptron can approximate arbitrary continuous functions
- Fourier is to convolution what Legendre is to inf-convolution. … …
- The Reeb graph of a function on a manifold is the graph of level sets
- Schatten p-norms are the l^p norms of the singular values. Define algebra
- A Brownian motion is approximated by a discrete random walk with Gaussian
- Discrete Cosine Transforms are eigenvectors of Laplacians with different
- The Fermat–Torricelli point minimizes the sum of the distances to the
- Comparison of the Wasserstein, Hellinger, Kullback-Leibler and reverse KL
- Zero crossing of the derivative of a blurred version (heat diffusion) of a
- Csiszár divergences compare the relative distributions of two histograms with 1.
- Eigenvalues of random matrices with iid entries converge to the Wigner
- Filtered back-propagation is a formula for the inverse of the Radon
- A sub-group of Möbius transforms consists of those preserving the Poincaré upper-
- Iteratively joining the mid points of the edges of a polygon converges
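A quick numerical check of the midpoint iteration (helper names are mine): the polygon shrinks toward its centroid, which the map preserves exactly.

```python
def midpoint_polygon(pts):
    # Replace each vertex by the midpoint of the edge to its successor.
    n = len(pts)
    return [((pts[i][0] + pts[(i + 1) % n][0]) / 2,
             (pts[i][1] + pts[(i + 1) % n][1]) / 2) for i in range(n)]

def diameter(pts):
    # Largest pairwise distance, used to monitor the shrinking.
    return max(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
               for p in pts for q in pts)
```

For a square, each step scales the diameter by 1/sqrt(2), so the iterates collapse onto the centroid.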
- Optimization can be carried over on manifolds using gradient descent,
- Implementing a numerical function corresponds to the creation of a
- The space of metric spaces is a metric space for the Gromov-Hausdorff
- Vitali Milman's insight: convex sets in high dimension look like hedgehogs
- Taylor series expansion of sine.
- Störmer, Verlet and leapfrog are different names for the same method.
- Tutte's theorem defines a valid drawing of planar graphs by solving a linear
- Varadhan formula shows that at first order, the heat kernel is equal to
- Linear approximation selects the first coefficients of a decomposition in
- The Marchenko-Pastur law is the distribution limit of eigenvalues of
- The fractional Laplacian is a differential operator generalizing the
- Summing the log of the nearest neighbors distances: a simple estimator for
- Line integral convolution is an anisotropic filtering which averages
- The heat equation can be applied on a surface and to a surface.
- Penrose tilings are aperiodic tilings obtained using typically 2 isosceles
- Code words defined using a Huffman tree generate codes with optimal
- Lorenz attractor is the set of limit trajectories for a simplified
- Radial basis functions perform interpolation and approximation by solving
- Kohonen's self-organizing maps define a map from a 2D space to the
- Behavior of dynamical system depends on the eigenvalues. In 2D it can be
- The convex hull is the smallest enclosing convex set. It is also the
- The binomial law is a sum of Bernoulli 0/1 random variables. After
- The heat equation can be applied to diffuse probability densities (in
- Taylor series of exp(it) up to degree 15.
- Leapfrog/Verlet are symplectic integrators which approximately conserve
- K-nearest neighbors is the baseline method for non-parametric
- Power iterations do not converge for matrices with multiple complex
- Tensor-driven anisotropic diffusion solves the heat equation on a manifold
- Stochastic gradient method: works also for infinite sums and expectation.
- The proximal operator is the implicit counterpart of the explicit gradient
- Iterative projections: converge in theory for convex sets. Works great in
- The damped harmonic oscillator (linear) vs simple gravity pendulum (non-
- Lagrangian discretization of measures (quotient space) vs. Eulerian
- Thin plate spline is one of the most popular data interpolation methods.
- Impact of the interpolation method on the level set of the resulting function.
- Dual number is a convenient way to implement forward mode of automatic
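A minimal dual-number sketch of forward-mode automatic differentiation (class and function names are mine):

```python
class Dual:
    # Dual number a + b*eps with eps**2 = 0; b carries the derivative.
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def _coerce(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._coerce(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._coerce(o)
        # The product rule appears automatically in the eps coefficient.
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def derivative(f, x):
    # Seed the eps part with 1 and read off f'(x).
    return f(Dual(x, 1.0)).b
```

Overloading the remaining operators (`-`, `/`, `exp`, ...) extends this to general programs.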
- Daubechies wavelets are a parametric family of orthogonal wavelets.
- The mean-shift algorithm is a clustering method that progressively evolves
- The Helmholtz-Hodge decomposition splits vector fields into two orthogonal
- The logistic map can have complex patterns of accumulation points depending
- The co-area formula relates the total variation to the perimeters of the
- The gradient flow of the Dirichlet energy is the heat equation, which blurs
- A Sobolev ball is an ellipsoid in infinite dimension aligned along the
- John’s and Löwner’s ellipsoids are polar to each other.
- The kernel of a domain is the (convex) set of locations from which you can
- Only harmonic spring and gravity central forces produce periodic motions.
- Polyak's heavy ball method speeds up gradient descent by introducing
- Lotka–Volterra equations are a non-linear 2D ODE modeling prey/predator
- The harmonic oscillator is the canonical linear second order ODE.
- Progressive decompression of a 3D mesh.
- I have updated my course notes on automatic differentiation (last section
- I have written a text on "the mathematics of neural networks" for a
- The l1 norm achieves the best balance between convexity and sparsity-
- Kriging (aka Wiener interpolation) accounts for uncertainty in kernel
- The Game of Life is the most celebrated 2D cellular automaton.
- The Delaunay triangulation in dimension d can be obtained by computing the
- I have written a popular-science article on "the mathematics of networks
- Many properties (convexity, derivative, stratifiability, partial-
- Comparing probability distributions: Csiszár f-divergences measure «
- Marden's theorem states that the roots of P’ are the focal points of the
- The marching squares/cubes is the standard algorithm to extract iso-
- The Legendre transform can be approximated in O(n*log(n)) using a Gaussian
- Lloyd's algorithm can be used to progressively diffuse points, defining a
- I have released 115 animations on Wikimedia Commons, feel free to use them
- The Gauss–Lucas theorem shows that the roots of the derivative of a polynomial
- One can map a surface to a sphere by minimizing a Dirichlet energy.
- Interpolating linearly polynomials of a fixed degree defines an
- Rational Bézier curves generalize Bernstein approximation using weights.
- Diffeomorphisms (warpings) are conveniently described in a Lagrangian way
- Shepard interpolation generalizes nearest neighbor interpolation.
- Power iterations converge to the leading eigenvector of a matrix.
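A pure-Python sketch of power iteration (the function name is mine); the Rayleigh quotient at the end estimates the leading eigenvalue:

```python
def power_iteration(A, iters=200):
    # Repeated matrix-vector products; A is a list of rows.
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]  # renormalize to avoid overflow
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(v[i] * Av[i] for i in range(n))  # Rayleigh quotient
    return lam, v
```

Convergence holds when the leading eigenvalue strictly dominates in modulus, which connects to the non-convergence caveat a few items above.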
- Taylor series approximate within its radius of convergence a function by a
- Comparing 2D linear interpolation methods of order 0, 1 and 3.
- The simple gravity pendulum is one of the simplest non-linear second order
- Jacob Bernoulli, Ars Conjectandi, 1713. Introduces many concepts central
- The Fast Fourier and Wavelet transforms correspond to two sparse
- Harmonic functions are obtained by solving Laplace equation and define
- Lloyd's algorithm defines a segmentation of the domain using Voronoi cells.
- The heat equation can be applied to diffuse probability densities (in particular,
- I have put online the course notes for my master course on optimization
- « Greedy » gradient descent using optimal step choice at each iteration
- Csiszár divergences are a unifying way to define losses between arbitrary
- The QR decomposition can be computed in a numerically stable way using
- Parabolic PDEs (e.g. heat) smooth out singularities. Hyperbolic PDEs (e.g.
- Banach fixed point theorem ensures existence of a unique fixed point for
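A fixed-point iteration sketch in the spirit of Banach's theorem (function name is mine), applied to the contraction cos on the reals:

```python
import math

def banach_fixed_point(f, x0, tol=1e-12, max_iter=10000):
    # Iterating a contraction converges to its unique fixed point.
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence: f may not be a contraction")
```

The limit of iterating cos is the Dottie number, approximately 0.7390851.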
- Having partial derivatives (being differentiable along the axes) does not
- The Wasserstein-1 distance (which is a norm!) between histograms on graphs
- Bernstein polynomials can be used to approximate 1D functions but also 2D
- The medial axis (aka skeleton) generalizes the perpendicular bisector between two
- The logistic map is the simplest dynamical system which exhibits
- The Optimal Transport geometry of 1-D Gaussians is flat in the (mean,std)
- Reproducing Kernel Hilbert spaces define norms on functions so that
- Step size selection is important for gradient descent on ill-conditioned
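A tiny demonstration of the step-size issue on an ill-conditioned quadratic (names and the condition number 100 are my choices): a step just under 2/L converges, one just over it diverges along the stiff direction.

```python
def gradient_descent(grad, x0, step, iters=500):
    # Fixed-step gradient descent; stability requires step < 2 / L,
    # where L is the largest curvature (here 100).
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Ill-conditioned quadratic f(x, y) = (x**2 + 100 * y**2) / 2
grad = lambda p: [p[0], 100.0 * p[1]]
```

With step 0.019 both coordinates contract; with step 0.021 the y coordinate is multiplied by -1.1 each iteration and blows up.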
- Reaction-diffusions are non-linear PDEs which describe the formation of a
- The Weierstrass function is continuous if a<1 but nowhere differentiable
- Monotone operators (Minty, Browder) generalize monotone functions.
- The SVD decomposes the action of a matrix into rotations and scalings
- The QR decomposition can be computed in a numerically stable way using
- Non-uniform B-spline functions are smooth piecewise polynomial functions,
- One can encode or approximate convex shapes as intersection of the semi-
- Total variation denoising (Rudin, Osher, Fatemi 92) was studied by the
- Pinsker's inequality is a fundamental inequality of information theory.
- Training a MLP with a single hidden layer is the evolution of a sparse
- Iterative closest point is one of the basic algorithms for rigid shape
- Adding an independent Gaussian variable is the same as doing a heat
- Fixed points can be attractive or repulsive depending on the derivative of
- Parametric density fitting is a parameter estimation problem aiming at
- Harris’s method is the most frequently used corner detector, based on the
- Schrodinger’s problem is an approximation (regularization using diffusion)
- The Travelling salesman problem is one of the most well-known NP-hard
- A mixture of Gaussians model can be represented as a weighted point cloud
- « Metaballs » are levelsets of mixtures of radial basis functions, which
- Burgers’ equation is one of the simplest non-linear partial differential
- Fourier approximation of a cat.
- Cellular automata are discrete dynamical models defined by a local
- Positive 1D polynomials are sums of squares. Motzkin’s 2D polynomial is
- The Courant-Friedrichs-Lewy (CFL) condition relates time/space step sizes
- Strassen algorithm reduces the complexity of matrix multiplication in terms
- Besides being useful to prove the Weierstrass polynomial approximation theorem
- Lloyd’s algorithm is the continuous counterpart of k-means. Optimizes the
- 3D surface compression is achieved by projecting on Fourier-like atoms
- Can you hear the shape of a drum? The eigenvectors of the Laplacian depend
- The Kalman filter recursively defines an estimator of the parameters of a
- The determinant is a log-concave function. Determining the Loewner enclosing
- The cone of positive semi-definite matrices is a fundamental object of
- The Fast Fourier Transform: « the most important numerical algorithm of
- The Laplacian of a graph is a positive semi-definite operator which mimics
- The equation XAX=B is surprisingly simple to solve on the set of positive
- Solving linear systems comes with different flavors.
- Robust regression is obtained by using a l1 loss in place of least
- For saddle point problems and more general games, gradient descent can
- Exponential families hybridize convex analysis and statistics.
- A joint distribution encodes the dependencies between its two marginal
- Least squares is the most fundamental data analysis method. Gauss or
- Zeros of random polynomials define point processes. For iid coefficients,
- The max and soft-max (log-sum-exp) of convex functions are convex.
- A mean field evolution is the large number limit of particles systems. The
- Birkhoff’s contraction for Hilbert’s metric is a key tool to quantify
- Gaussian scale mixtures define heavy-tailed distributions by combining
- SMACOF is the most popular stress minimization algorithm for
- The Shapley-Folkman-Starr theorem states that the Minkowski sum «
- The Optimal Transport interpolation between two Gaussians is a Gaussian.
- Runge-Kutta formulas are the workhorse for the numerical integration of
- The Kabsch-Nadas formula solves in closed form the orthogonal least square
- Sparse Cholesky factorizations are obtained by applying the algorithm after
- K-means++ defines a seeding strategy which is approximately optimal up to
- The continuous wavelet transform was introduced by Grossman and Morlet.
- NP-complete problems are problems in NP which are NP-hard.
- Slides for my talk "Off-the-grid Sparse Estimation". Summarizes years of
- Lee and Seung algorithm is the most popular matrix factorization
- We organise a conference (in French, sorry!) for the best PhD thesis
- When regularizing « optimal transport like » problems, you should use the
- The Kolmogorov-Arnold superposition theorem answers (in the negative) Hilbert's
- The Wasserstein distance over Gaussians defines the so-called Bures
- The A^* heuristic path planning algorithm reduces the search space of
- New preprint: "Sinkhorn Divergences for Unbalanced Optimal Transport".
- Cauchy–Binet formula generalizes the determinant of a product of square
- Spectrahedra are matrix generalizations of polyhedra. The class of
- Lagrange and Hermite interpolations can be solved in closed form using
- Spherical interpolation (SLERP) of quaternions defines the geodesic over
- Very nice blog post from Song Mei on the replica method from statistical
- The K-means algorithm computes a stationary point of the quantization error by
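A minimal 1D sketch of the alternating k-means iterations (function name and tie-breaking are mine): assign each point to its nearest center, then recenter on the cluster means.

```python
def kmeans_1d(points, centers, iters=100):
    # Alternate: nearest-center assignment, then recenter on the mean.
    centers = list(centers)
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers
```

Each step decreases the quantization error, so the iterates reach a stationary point (not necessarily a global minimum).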
- W. Zhou, A.C. Bovik, H.R. Sheikh, E.P. Simoncelli, Image quality
- Iteratively reweighted least squares minimizes a robust loss function (e.g. l1 norm) by
- L2 (ridge) vs L1 (lasso) regularizations for the regularization of
- In the static regime, the electric field generated by point sources is the
- Point estimators are defined by minimizing an average risk under the
- JPEG compression performs a weighted entropic coding of discrete cosine
- Non-local means computes an adaptive filtering by comparing patches in
- Gabor and Wavelets are linear dictionaries offering different tilings of
- Smoothing splines define regularized least squares whose solutions are sums
- Heat vs wave equations.
- The Johnson-Lindenstrauss lemma shows that one can project linearly m
- The dead leaves model defines a stationary random distribution of sets by a
- Independent component analysis estimates a mixing matrix leveraging the
- The travelling salesman problem looks for the shortest Hamiltonian cycle
- Determinantal (resp permanental) processes are repulsive/fermionic (resp.
- The Wigner-Ville distribution mimics a probability distribution on the
- Low frequency approximation of discontinuous functions generates Gibbs
- Principal component analysis and non-negative matrix factorization are two
- The Brachistochrone problem was solved by Bernoulli and is the birth of
- Moreau’s decomposition generalizes the orthogonal decomposition from
- In the static regime, the electric / magnetic fields generated by point
- Heat diffusion vs. wave equation on a surface.
- Matrix decompositions come in many flavors!
- n-ellipses generalize ellipses with n foci. Spectrahedra (hence convex)
- Moreau's decomposition generalizes orthogonal decomposition from linear
- Reverse mode automatic differentiation computes the gradient with the same
- Non-linear approximation in a Haar wavelets basis performs an adaptive
- Gaussian functions are stable under pointwise and convolution products.
- A copula remaps the cumulative function to have uniform marginals. It is a
- Strong convexity and smoothness are the two key hypotheses to make
- A paper I enjoyed reading recently: A. P. Bartok, R. Kondor and G. Csanyi:
- Shepard interpolation is a surprisingly simple multi-dimensional
- A paper (small book) I enjoyed reading: Spectral Graph Theory, Chung.
- A paper I enjoyed reading recently: « The space of spaces (...) » KT
- Parabolic PDEs come with lots of flavors: heat, total variation,
- A paper I enjoyed reading recently: Inverse problems in spaces of
- Conditional distributions define a parametric collection of probability
- A paper I enjoyed reading recently: Breaking the Curse of Dimensionality
- A paper I enjoyed reading recently: "From 3D models to 3D prints: an
- Photo-realistic texture synthesis methods perform pixel, patch or more
- A paper I enjoyed reading recently: Multiscale Representations for
- A paper I enjoyed reading recently: @julienmairal "Incremental
- Gradient descent on particles’ positions (Lagrangian) is equivalent to an
- A paper I enjoyed reading recently: "Remarks on Toland’s duality,
- Paper I enjoyed reading recently: Learning with Fenchel-Young Losses,
- The Fourier transform diagonalizes convolution operators aka circulant
- A paper I enjoyed reading recently: The Complexity of Computing a Nash
- A paper that I enjoyed reading recently: Automatic differentiation of non-
- Calculus of variations studies infinite dimensional optimization problems
- A paper I enjoyed reading recently (poke @KyleCranmer) "Quantum optimal
- A paper I enjoyed reading recently: Rank optimality for the Burer-Monteiro
- Probability distributions can be approximated using Eulerian or Lagrangian
- A paper I enjoyed reading recently: "WHAT IS ... a Graphon?", Daniel
- A paper I enjoyed reading recently "Is computing with the finite Fourier
- Bayes formula relates the posterior distribution to the likelihood
- A paper I enjoyed reading recently: Sorting out typicality with the
- Regularity of optimal transport map is a difficult question, studied by
- A paper I enjoyed reading recently: Invariant and Equivariant Graph
- Recent paper I enjoyed reading: Hashimoto, Gifford, Jaakkola, Learning
- Durand–Kerner method: a surprisingly simple way of computing all the roots
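A sketch of Durand–Kerner in plain Python (function name is mine, the initialization by powers of 0.4 + 0.9i is the classical choice): all root estimates are updated simultaneously, each repelled by the others.

```python
def durand_kerner(coeffs, iters=200):
    # coeffs[k] is the coefficient of z**k; the polynomial must be monic.
    n = len(coeffs) - 1
    def p(z):
        return sum(c * z ** k for k, c in enumerate(coeffs))
    # Classical initialization: powers of a point that is not a root of unity.
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iters):
        new = []
        for i, z in enumerate(roots):
            denom = 1
            for j, w in enumerate(roots):
                if j != i:
                    denom *= z - w  # repulsion from the other estimates
            new.append(z - p(z) / denom)
        roots = new
    return roots
```

On z^3 - 1 the three estimates converge to the three cube roots of unity.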
- The dual of a triangulation is defined by joining the centroids of
- When rolling an oloid in a straight line, all its surface touches the ground.
- Parametric surfaces embedded in Euclidean space define a Riemannian
- We are so glad to welcome @katecrawford for the inauguration of her
- The 1D Optimal Transport distance between Gaussians is the Euclidean distance
- The push-forward operator is a linear map between measures which operates
- Perron-Frobenius is fundamental for the study of Markov chains
- Smooth functions can be decomposed as the sum of a convex and a concave
- The discrete Fourier basis is a sampling of the continuous Fourier basis.
- The gradient field defines the steepest descent direction. The gradient
- Any optimization problem is equivalent to a convex (linear) one (but
- The heat equation on polynomials (for the derivative on the complex plane)
- Brenier’s theorem solves the long standing question of existence of an
- Analyzing the global convergence of Newton is hard. Attraction areas are
- Nonlinearity matters. Linear diffusion (heat) has non-compactly supported
- PageRank is the leading eigenvector of a stochastic matrix. Can be
- The Hartley orthogonal matrix is associated with the cas = cos + sin function.
- POCS, Dykstra and DR often succeed in finding points in the intersection
- Mirror-stratifiability (Drusvyatskiy/Lewis): generalizes duality of
- Support vector machines maximize the classification margin for linear
- There exist various discrete cosine transforms which correspond to
- Logistic regression defines fuzzy classification boundaries using the
- QR algorithm is a gem of numerical algebra: for a symmetric input, also
- Visualizing zeros and poles of rational functions over the complex plane.
- The 2x2 SDP, degree 2 positive polynomials and 2-D ice-cream cones coincide.
- A new way to learn about semi-discrete optimal transport, including stochastic
- The l^p functional is convex and hence a norm for p>=1. It is sparsity-
- 1D optimal transport interpolation corresponds to interpolating the
- k-nearest neighbors is the baseline non-parametric regression and
- The Eikonal equation is solved by advancing a front in the
- The Douglas-Rachford algorithm (the dual of ADMM) computes the projection
- As suggested by @sigfpe, interpolating between planar point clouds can be
- Dithering is the process of converting a real-valued image into a finite
- Eigenfaces are singular vectors of an image dataset. A principal component
- Monte Carlo integration approximates integrals at a rate 1/sqrt(n)
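A standard Monte Carlo illustration (function name and the pi example are mine): estimate pi from the fraction of random points landing in the quarter disk; the error decays like 1/sqrt(n).

```python
import random

def monte_carlo_pi(n, seed=0):
    # Fraction of uniform points falling in the quarter disk, times 4.
    rng = random.Random(seed)
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                 for _ in range(n))
    return 4.0 * inside / n
```

Quadrupling n only halves the error, which is the slow but dimension-independent rate the tweet refers to.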
- Richardson–Lucy is the most celebrated deconvolution method under
- Projected gradient and Frank-Wolfe are the cornerstones of constrained
- Tutte embedding is the solution of a Poisson equation. Iterative
- Pruning the Voronoi diagram of a set of points estimates the medial axis
- Various discrepancies between 1D probability distributions are derived
- Reuleaux polygons and Meissner bodies are particular instances of bodies
- The gradient field defines the steepest descent direction. The gradient
- Tarski–Seidenberg theorem in action: the set of polynomials having a real
- Fréchet, Gumbel and Weibull distributions are extreme value distributions
- Orthogonal matrices come with lots of flavours!
- The distance function is smooth away from the medial axis (aka the skeleton).
- A lattice is an ordered set where any two elements have an upper and lower
- Oldie but goldie. Karl Weierstrass, Über continuirliche Functionen eines
- Error checking code with an additional parity bit defines a simplex in the
- Compactly supported orthogonal wavelets are defined using filter banks.
- Heat equation vs wave equation inside a planar domain. Boundary conditions
- Filtering complex vectors by convolution defines a sequence of polygons.
- Woodbury formula allows one to compute in two different ways the solution
- Gradient flows in metric spaces formalize the notion of descent methods on
- Partial smoothness is the correct notion of "piecewise smooth functions"
- L Page, S Brin, The PageRank citation ranking: Bringing order to the web,
- Error diffusion is the simplest dithering algorithm, which quantizes an
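A minimal 1D error-diffusion sketch (names are mine; real image dithering such as Floyd–Steinberg spreads the error in 2D): quantize each sample to the nearest level and push the error onto the next sample.

```python
def error_diffusion(values, levels=(0.0, 1.0)):
    # Quantize to the nearest level, carry the quantization error forward.
    out, err = [], 0.0
    for v in values:
        target = v + err
        q = min(levels, key=lambda l: abs(l - target))
        out.append(q)
        err = target - q
    return out
```

Because the error is propagated rather than discarded, the local average of the binary output tracks the input gray level.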
- Visualizing the attraction basins of Newton’s method on polynomials as the
- Stein’s lemma is a surprisingly simple yet useful characterization of the
- Interpolating coefficients of polynomials generates interpolations between
- Tarski–Seidenberg theorem: semi-algebraicity is stable by projection. The
- QR algorithm is a gem of numerical algebra: iterative QR decomposition
- Principal component analysis projects the data on the leading eigenvectors
- Auto-regressive processes generate dynamical textures, and can integrate
- The wavelet transform computes an optimally sparse representation of
- There exists a single increasing map between two distributions; it is the
- Joukowsky conformal map: solving airfoil design in the pre-computer era.
- The circle, semi-circle and Marčenko–Pastur laws are three simple examples
- Gibbs distributions are maximum entropy distributions. Used in conjunction
- Difference of convex (DC) programs are non-convex problems enjoying a
- Spherical interpolation (SLERP) of quaternions defines the geodesic over
- Sampling a signal is equivalent to periodizing its Fourier transform. For
- Lloyd’s algorithm is the continuous counterpart of k-means. Optimizes the
- I wrote a short article to present numerical Optimal Transport and its
- Zonohedra are images of the l^inf ball under a linear map, while symmetric polyhedra
- Hypotrochoids are curves obtained by rolling a circle inside another one,
- If d is a distance, then d^p for 0<p<1 is its snowflake distance. Naming
- The Delaunay triangulation is the dual of the Voronoi diagram. The most
- Dijkstra’s algorithm computes the geodesic distance on a graph in
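A compact Dijkstra sketch with a binary heap (graph encoding is my choice): distances are finalized in increasing order, which is why nonnegative weights are required.

```python
import heapq

def dijkstra(graph, source):
    # graph: node -> list of (neighbor, nonnegative weight).
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

With the heap this runs in O((V + E) log V), the "greedy front propagation" the tweet alludes to.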
- The bilateral filter is a non-linear edge-preserving filter where weights
- Sunflower patterns are obtained using a Fermat spiral with a golden angle
- Spatially varying blurs can be approximately computed by merging together
- Markov chain Monte Carlo methods sample from a Gibbs distribution without
- Non-circular gears are ubiquitous and generate non-constant rotation
- The space of compact sets in a metric space is a compact set for the
- Conference "Imaging and machine learning" April 1st-5th @InHenriPoincare,
- Schatten p-norms are the l^p norms of the singular values. Define algebra
- Optimal quantization is a minimum Optimal Transport projection, as
- Projected gradient is the simplest algorithm to perform constrained
- The perspective transform turns a 1D convex function into a 2D positively
- Dynamics of a system of rods (multiple pendulum) is an ODE evolution on an
- Monge and Kantorovitch Optimal Transport are equivalent when the measures
- A new Numerical Tour on Multilayer Perceptrons with a single hidden
- My book with Marco Cuturi on computational optimal transport has just been
- Brunn-Minkowski inequality is one of the fundamental inequalities in
- Not really oldies but goldies: @keenanisalive, C. Weischedel, M. Wardetzky
- Min/max games are convex/concave saddle points. Can be solved by primal-
- The Weierstrass function is continuous if a<1 but nowhere differentiable
- Cats, bunnies and elephants are ubiquitous in graphics and applied maths.
- The Fisher-Tippett-Gnedenko theorem is the central limit theorem for the
- Poisson disk process exhibits blue noise power spectrum which is ideal for
- The Laplacian pyramid is the ancestor of the wavelet transform. Defines a
- Foveation is a spatially varying « convolution ». It is similar to human
- Randomizing the phase of the Fourier transform is the simplest texture
- The mean-shift algorithm is a clustering method that progressively evolves
- Integral curves of a vector field join stationary points of the field.
- Holomorphic functions (differentiable over complex numbers) define
- Quadratic splines are basic tools for vector graphics in CAD. They can be
- Generalized Apollonian gaskets are fractal domains defined by
- Maury/Roudneff-Chupin/Santambrogio model of crowd motion with congestion
- Filtered back-propagation is a formula for the inverse of the Radon
- String art generates logarithmic spiral arcs by iteratively connecting
- Reaction-diffusion with spatially varying weights.
- Subdivision curves define smooth approximating or interpolating curves
- Optimal transport flow defines particle evolutions which are an l2 gradient
- Hilbert space filling curve defines a continuous map from a segment to a
- Stationary Gaussian fields are characterized by their power spectrum.
- Mean value coordinates extend usual barycentric coordinates to arbitrary
- Poisson disk process has all points separated from each other by a minimum
- Chaikin’s corner cutting is the simplest and the most famous approximating
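A minimal Chaikin corner-cutting sketch for a closed polygon (function name is mine): each edge contributes its 1/4 and 3/4 points, doubling the vertex count per iteration.

```python
def chaikin(pts, iters=1):
    # Cut each corner: keep the 1/4 and 3/4 points of every edge.
    for _ in range(iters):
        new = []
        n = len(pts)
        for i in range(n):
            (x0, y0), (x1, y1) = pts[i], pts[(i + 1) % n]
            new.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            new.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        pts = new
    return pts
```

Iterating converges to a smooth (quadratic B-spline) limit curve.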
- (Not so) oldies but goldies: Generative Adversarial Nets, @goodfellow_ian,
- The cepstrum of a time series is the inverse Fourier transform of the log
- Mean curvature motion is the most fundamental curve evolution. Equivalent
- n-ellipses generalize ellipses with n foci. Spectrahedra (hence convex)
- Warping between an arbitrary domain and a disk can be achieved by Tutte
- A Bézier curve is barycentric interpolation using Bernstein polynomial
- ⚡️ “Oldies but Goldies”
- Tensor-driven anisotropic diffusion solves the heat equation on a manifold
- Snakes aka active contours are elastic curves minimizing a potential
- Conformal maps (differentiable over the complex plane) can be defined
- The distance function defines offset of shapes and curves. Its
- The structure tensor is the local covariance matrix field of the gradient
- MAX-CUT is an NP-hard problem. Goemans and Williamson SDP relaxation with
- Dynamic Time Warping aligns two time series by an increasing map. Solved
- Gestalt theory defines grouping laws explaining human vision and its
- Burgers’ equation is the prototypical non-linear advection/diffusion
- The Minkowski sum is commutative, associative and distributive wrt union.
- The geodesic in heat method of Crane, Weischedel and Wardetzky
- Wavelet transform of an image iterates low pass and high pass filtering
- Subdivision curves exist in two flavors: interpolating and approximating.
- The Poisson point process has a flat power spectrum while the Poisson disk
- The medial axis (aka skeleton) is the set of points where the distance is
- A quadratic Bézier spline is the envelope of regularly spaced lines.
- Unbalanced Optimal Transport (OT) generalizes OT to allow for mass
- Comparing Haar wavelet approximations on an image and on a sphere.
- Fourier approximation of a cat.
- Apollonian gaskets are fractal domains defined by progressively packing
- The continued fraction approximation of pi at order 4 gives the
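The convergents of pi can be computed with a short floor-and-invert sketch (function names are mine): pi has coefficients [3; 7, 15, 1, ...], and truncating after four terms gives the classical approximation 355/113.

```python
import math
from fractions import Fraction

def continued_fraction(x, n):
    # Coefficients [a0; a1, ..., an] via repeated floor-and-invert.
    coeffs = []
    for _ in range(n + 1):
        a = math.floor(x)
        coeffs.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return coeffs

def convergent(coeffs):
    # Evaluate the continued fraction exactly, back to front.
    value = Fraction(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        value = a + 1 / value
    return value
```

Floating-point pi is accurate enough for the first few coefficients; exact arithmetic via Fraction keeps the convergent exact.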
- The Fourier slice theorem relates the 1D Fourier transform of Radon
- Can you hear the shape of a drum? The eigenvectors of the Laplacian depend
- Hadamard–Rademacher–Walsh transform is the Fourier transform on the group
- The envelope of iterated equi-spaced connected points converges to
- Natural gradient is the gradient for the Riemannian structure on densities
- Mesh processing applies numerical methods (PDEs, optimization, etc) on 3D
- Positive 1D polynomials are sums of squares. Motzkin’s 2D polynomial is
- Fourier transform on groups turns convolution into multiplication. For
- Barycentric coordinates are uniquely defined in triangles, but there are
- Wasserstein (optimal transport) flow generalizes the evolution of a system
- Simplicial homology defines discrete derivatives (boundary operators)
- Vietoris–Rips and Čech complexes are fundamental simplicial complexes
- Regular 1:4 subdivision defines a multi-scale analysis on meshes.
- The minimum spanning tree of points for the Euclidean distance is a sub-
- Large scale N-body simulations require evaluating sums involving long-
- Alpha-shapes are a multiscale family of simplicial complexes included in
- The basic idea of optical flow computation is to invert the ill-posed
- Classification and Regression Trees (CART) defines structured recursive
- Interpolating between implicit equations defines morphings between curves
- Eigenvectors of the Laplacian on compact planar domains define an
- The envelope of a set of curves is another curve tangent to all these
- A good triangulation to solve stably elliptic PDEs should contain
- A graph is planar if and only if it does not contain as minor the complete
- Surface parameterization computes a diffeomorphism between a 3-D surface
- Triangulated graphs are maximal planar graphs, i.e. for which one cannot add
- The max of n independent Gaussians concentrates around sqrt(2log(n)). The
- The Marching Cubes is the standard algorithm to extract an isosurface
- Regular 1:4 triangle subdivision allows one to define a Haar transform on
- Generative Adversarial Networks approximately solve parametric density
- The Fiedler vector of a graph is the second eigenvector of the Laplacian.
- Simplicial complexes are combinatorial objects generalizing
- The Fisher metric is the unique Riemannian structure on parametric
- Representation theory defines a Fourier transform on groups. Finite
- Convolutional neural networks are shift invariant representations obtained
- Reproducing Kernel Hilbert spaces define norms on functions so that
- Saddle points come with lots of flavors. Quadratic saddles are non-
- The cumulative function is to the max of random variable what the Fourier
- Some works of the Gauss prize winners.
- David Donoho just got the Gauss prize at ICM 2018! IMHO his four main
- The farthest point sampling is a greedy algorithm which asymptotically
- A Riemannian manifold is locally a Euclidean space. An embedded surface
- Color perception can be modeled using a Riemannian manifold. The
- Continued fractions are in some sense optimal ways to approximate with
- I will cover (at least!) these 18 algorithms for my « mathematical
- Not oldies but goldies: main papers of the 4 Fields medalists can be found
- Kantorovitch formulation of Optimal Transport is a linear program
- The heat equation can be applied to diffuse a function on a surface, but
- A particle system is a Lagrangian way to minimize an energy over measures. For
- Training a multi-layer perceptron for classification. A large enough
- I put together a list of great algorithms. Thanks to everyone who replied
- Integrating a flow field defines a diffeomorphism. Smooth vector fields are
- In 2000 Dongarra & Sullivan published the Top Ten Algorithms of the
- Eigenvalues of random matrices with iid entries converge to Wigner circle
- QR algorithm repeatedly applies QR decomposition to generate a sequence of
- The gradient field defines the steepest descent direction. The gradient
- Three ways to solve Fokker–Planck equations: PDE evolution, stochastic
- The stable marriage is solved by the Gale-Shapley algorithm. Shapley and
- The mean curvature motion is the most fundamental curve evolution.
- Fourier descriptors (Zahn and Roskies, 1972) are normalized Fourier
- I will start a « oldies but goldies » series starting tomorrow
- Eulerian (image, grid, finite elements, finite volumes, etc.) vs
- Histogram equalization is 1-D optimal transport. Linearly interpolating
- Curse and blessing of non-smooth optimization: non-smooth parts (eg
- Low frequency Fourier approximation of a signal and an image generates
- Edge detectors compute zero crossing of multiscales second-order
- Sinkhorn algorithm defines a smooth Optimal Transport loss function
- Parabolic PDEs come with lots of flavors: heat, total variation,
- The stochastic block model is the simplest random graph with communities
- Optimization algorithms come with many flavors depending on the structure
- Diffeomorphisms (warpings) are conveniently described in a Lagrangian way
- Edge collapse is a fundamental mesh processing primitive, at the heart of
- Total variation denoising (Rudin, Osher, Fatemi, 92) was studied in theory
- Fokker–Planck equation equivalently describes the movement of a random
- Stochastic gradient descent is the workhorse of many large scale machine
- Gradient flow of the Dirichlet energy is heat equation, which blurs edges.
- The Cucker-Smale system of ODEs is the simplest model of particles
- The l^p functional is convex and hence a norm for p>=1. It is sparsity-
- The gradient field defines the steepest descent direction. The gradient
- On Euclidean space, Maximum Mean Discrepancy norms between measures are L2
- Approximation of a function on the sphere using an increasing number of
- Optimal assignment in 1D is simply sorting the values. Allows one to
- Stable fluids of Jos Stam is a semi-Lagrangian solver for Navier-Stokes,
- Displacement Interpolation (Robert McCann) is the optimal transport
- Voronoi diagram can be defined on surface using the geodesic distance.
- f-divergences (KL, TV, Hellinger, Chi2, etc.) are all closely related and
- Color 3D quantization versus approximation are two ways to reduce the
- From Monge-Kantorovitch’s optimal transport to Schrodinger’s lazy gas, there
- Laguerre (aka power) diagram is a generalization of Voronoi partitions,
- Normal mapping (aka bump mapping) computes pixel’s values using an
- The Wasserstein-1 distance between histograms on a graph can be re-written
- Lloyd algorithm can be applied on a surface using the geodesic Voronoi diagram.
- As shown by @arthurmensch and @mblondel_ml, differentiable max&argmax
- Entropic regularization (aka Sinkhorn) interpolates between a non-
- The hypercube (tesseract in dimension 4) is the d-dimensional cube. Can be
- Multilayer perceptron with 1 hidden layer breaks the curse of
- Visualizing zeros and poles of rational functions over the complex plane.
- Solving a maze using front propagation (Fast-Marching method).
- Lambert and Phong illumination models are the most well known empirical
- Deep network comes in two flavors: discriminative (aka convolutional for
- Dual norms (aka Integral Probability Metrics) metrize weak convergence (aka
- Yann Brenier just introduced and studied the arctangential heat equation,
- Sinkhorn first proved the convergence of the diagonal scaling algorithm
- Difference of convex (DC) programming is a class of non-convex
- Voronoi diagrams can be defined over general metric spaces, for instance
- k-nearest neighbors is the baseline non-parametric regression and
- The Fast Marching algorithm performs a front propagation on the surface.
- Interior point methods solve approximately in polynomial time conic
- Nonlinearity matters. Linear diffusion (heat) has non-compactly supported
- The Monge-Ampère equation is a non-linear generalization of the Poisson
- Curve approximation using Fourier series corresponds to drawing the curve
- During elastic collision, particles of equal mass simply exchange their
- Network flow problem is a specific class of linear programs that finds
- Shepard interpolation generalizes nearest neighbor interpolation.
- Laguerre (aka power) diagram is an amazing tool to solve the semi-discrete
- Maximum Likelihood Estimation is a density fitting problem with a
- Csiszar divergences compare the relative densities of two measures with 1.
- Heat diffusion vs. wave equation on a surface.
- Non-circular gears generate non-regular motions from regular ones. Used
- Spherical harmonics are the equivalent on the sphere of the Fourier basis.
- The Radiosity equation describes conservation of light for diffuse
- Fourier approximation of closed curves.
- Volumetric 3D heat equation smoothes all the level sets of a function.
- A multilayer perceptron with 1 hidden layer approximates functions using sums
- 3D metaballs: remind me the good old time of coding with @antoche the C++
- Tutte embedding is the solution of a Poisson equation. Iterative
- Mathematical models: part 1/2.
- Spherical harmonics is an orthogonal basis of eigenvectors of the
- The Navier-Stokes equations describe the motion of an incompressible viscous
- The Eikonal equation is solved by advancing a front in the
- Subdivision surfaces define a hierarchy of embedded triangulations using
- Julia sets are attraction basins of iterated complex maps. The best known
- Heat equation vs wave equation inside a planar domain. Boundary conditions
- Interpolating coefficients of polynomials generates interpolations between
- Analyzing the global convergence of Newton is hard. Attraction basins are
- Local averaging is consistent with the linear heat equation. Local median
- Natural neighbor interpolation (Robin Sibson) is a generalization of
- Iterative projections do not converge in general to the projection onto
- Support Vector Machine (Vapnik, Chervonenkis, Cortes) performs
- The co-area formula is the most fundamental tool of geometric measure
- A 1 hidden layer perceptron can approximate arbitrary continuous functions
- Thin plate (biharmonic) splines are an interpolation method with a closed
- Eigenvalues of random matrices with iid entries converge to Wigner circle
- « Metaballs » are level sets of mixtures of radial basis functions, which
- The Helmholtz-Hodge decomposition splits vector fields into two orthogonal
- Gaspard Monge proved in 1781 that optimal assignment for the Euclidean
- Shepard interpolation is a surprisingly simple multi-dimensional
- Approximation vs quantization: two ways to compress.
- Lagrange and Hermite interpolations can be solved in closed form using
- Lloyd’s algorithm is the continuous counterpart of k-means. Optimizes the
- De Casteljau’s algorithm defines a subdivision algorithm to evaluate
- Representing complex functions using colormap and level sets produces a
- Voronoi cells on a surface using geodesic distance. Dualizes to a valid
- Parabolic PDEs (e.g. heat) smooth out singularities. Hyperbolic PDEs (e.g.
- Pruning the Voronoi diagram of a set of points is a way to estimate the
- The subdifferential is the set of slopes below the graph of a convex
- The color palette of an image is the 3D empirical distribution of the
- The Reeb graph of a function on a manifold is the graph of level sets
- Heat equation is both L2 gradient flow of Dirichlet energy and Wasserstein
- Empirical risk minimization is a workhorse of supervised learning.
- The dual of a triangulation is obtained by joining the circumcenters of
- Metropolis-Hastings is a surprisingly simple way to sample from a density
- Finding the optimal denoising parameter is a bias-variance tradeoff.
- The geodesic distance extends a local metric into a global one. Geodesic
- The proximal point algorithm [Martinet70,Rockafellar76] is the most
- Fourier is 250 today! Fourier approximation creates Gibbs ringing
- Robert McCann's displacement interpolation is the geodesic for optimal
- The polar of a set generalizes duality between polyhedra. It is to sets
- The divergence (both continuous and discrete) is (minus) the adjoint of
- Floyd–Warshall algorithm computes all pairs shortest distances. Related to
- Fourier transforms come with lots of flavors. Sampling and periodization
- Approximation using anisotropic triangulations performs optimal
- The gradient flow of the Dirichlet energy is the heat equation, which blurs
- From a color image to its color palette. This is the empirical
- Wavelets on surfaces were introduced in 1995 by Wim Sweldens and Peter
- WGAN and VAE are respectively dual and primal approximations with deep-
- The spectrogram (aka short time Fourier transform) reveals the geometry of
- The entropy comes with lots of flavours. The relative entropy (aka
- A map is conformal (angle preserving) if and only if it is holomorphic
- 2-D Fourier atoms are tensor product of 1-D Fourier atoms. Product of a
- Reaction-diffusion equations are non-linear PDEs which describe the formation of a
- First official release of our book "Computational Optimal Transport"!
- Elastic net (Zou , Hastie, 2005) interpolates between the Lasso (l1
- The level set method of [Osher and Sethian, 1988] performs curves and
- Monotone operators (Minty,Browder) generalize monotone functions.
- Code words defined using a Huffman tree generate codes with optimal
- Discrete Fourier basis is simply a sampling of the continuous Fourier
- The Numerical Tours now in R! 35 @ProjectJupyter notebooks: machine
- The Brownian motion (aka Wiener process) is the scaling limit of a random
- The primary visual cortex is organised as an oriented wavelet transform
- Delaunay refinement aka Ruppert’s and Chew's algorithms. Introduced by
- Moreau-Yosida regularization smoothes a function. It is the inf-
- The Fast Marching algorithm of Sethian computes a weighted geodesic
- Lagrange and Fenchel-Rockafellar duals are (almost) the same. Crucial to
- On non-convex problems, alternating projections often get stuck in local
- The medial axis (skeleton) generalizes mediatrix between 2 points. Set of
- Generalized barycentric coordinates (mean values of Floater being the
- The Delaunay triangulation is the dual of the Voronoi diagram. The most
- Histogram equalization is 1-D optimal transport. 1 line of Python code, 2
- Estimating optimal transport distance from empirical samples suffers from
- Waves on a surface!
- Be careful when training your GANs, round #2: explicit descent is
- Total variation denoising (L. Rudin, S. Osher, E. Fatemi, 92) was studied
- The l1 norm achieves the best balance between convexity and sparsity-
- Gradient descent is inefficient to find saddle point (Nash equilibrium)
- Cauchy–Binet formula generalizes the determinant of a product of square
- Optimal computation of gradients is equivalent to optimal Jacobian
- The gradient field defines the steepest descent direction. The gradient
- Only + and - operations: Walsh-Rademacher-Hadamard vs Haar. Fourier vs
- Logistic classification (regression…) defines fuzzy classification
- Heat equation on a surface and of a surface!
- Strong convexity and smoothness are the two key hypotheses to make
- Iterative Soft Thresholding algorithm to solve the LASSO is a special case
- The Fast Marching of Sethian is a generalization of Dijkstra’s algorithm.
- Lagrangian discretization of measures: quotient space topology. Eulerian
- Douglas-Rachford (dual of ADMM) often surpasses iterative projections on
- Mean value coordinates extend usual barycentric coordinates to arbitrary
- Stochastic gradient descent: +: works also for infinite sums and
- The mean curvature motion is the most fundamental curve evolution.
- Backprop in neural-networks is reverse mode auto-diff applied to a simple
- The distance field is the unique viscosity solution of the Eikonal
- Berry–Esseen theorem provides a quantitative estimation of the convergence
- Sheep Wasserstein gradient flows: the perfect illustration of the crowd
- The soft-max is the gradient of the log-sum-exp. Central to perform
- Can you hear the shape of an elephant? Eigenvectors of the Laplace-
- What is complexity of computing gradients? The optimal answer is provided
- Tutte theorem defines a valid drawing of planar graphs by solving a linear
- The machine learning Numerical Tours now in Python! Clustering, ridge and
- The Fourier transform diagonalizes convolution operators aka circulant
- Iterative projections: converge in theory for convex sets. Works great in
- Linear approximation selects the first coefficients. Non-linear selects
- Any pair of planar triangulations of n vertices can be connected by O(n)
- The product of two Gaussians is a Gaussian. A sum of Gaussians is an
- When rolling an oloid in straight line, all its surface touches the ground.
- Varadhan formula shows that at first order, the heat kernel is equal to
- The central limit theorem equivalently reads as the convergence of
- The Fast Fourier Transform: « the most important numerical algorithm of
- Haar transform: only + and - yet surprisingly powerful. The first example
- Birkhoff's contraction for Hilbert's metric: the key tool to quantify
- Orthogonal matrices come with lots of flavours!
- The codes for most of my tweets are available here:
- Why the Central Limit Theorem cannot hold in probability. A 3 lines proof
- The Benamou-Brenier geodesic! In their landmark paper, Jean-David Benamou
- Subdivision schemes: simple and convenient way to manipulate smooth curves
- A Sobolev ball is an ellipsoid in infinite dimension aligned along the
- Non-linear approximation in a Haar wavelets basis performs an adaptive
- Density estimation: kernels vs nearest-neighbors. Parametric vs. non-parametric.
- Ingrid Daubechies Orthogonal Wavelets: a mathematical gem. One of the
- The space of metric spaces is a metric space for the Gromov-Hausdorff
- Spherical interpolation (SLERP) of quaternions defines the geodesic over
- Optimal transport to simulate the dynamics of two groups of football
- Frank-Wolfe algorithm (aka conditional gradient) works over very general
- Mirror descent generalizes gradient descent using Bregman geometries.
- Gradient descent is consistent with a first order ODE. Nesterov’s
- The Weierstrass function is continuous if a<1 but nowhere differentiable
- Toland duality for non-convex programming. Surprisingly efficient if the
- Perron-Frobenius is fundamental for the study of Markov chains … and
- Encoding points as roots of polynomials and interpolating their
- Numerical Optimal Transport: slides for a course.
- Most parametric probability distributions families can be written as
- MLE minimizes an empirical KL loss. MKE minimizes an optimal transport
- Bregman divergences are convex distance-like functionals which are locally
- Geometry of Gaussians: Optimal Transport is flat (Euclidean). Fisher-Rao
- The proximal map projects on level sets. For l^p norms, it is a
- A copula is a way to picture the dependency between two random variables.
- Displacement interpolation between continuous and discrete measures, aka
- Impact of entropic regularization (aka Sinkhorn) on optimal transport:
- Comparison of explicit stepping (unstable) vs implicit stepping (stable)
- Great video by @JustinMSolomon to give the insights of Wasserstein
- Just released course notes on deep learning (covers SGD, auto-diff and
- Computational Optimal Transport: the book. All you ever wanted to know
- Every function can be decomposed as the sum of a convex and a concave
- Reuleaux polygons and Meissner bodies are particular instances of bodies
- The proximal operator is the implicit counterpart of the explicit gradient
- Boris Polyak’s heavy ball and Yurii Nesterov’s descent use momentum to
- Convergence in law of random vectors and weak* convergence of measures are
- Spirograph (hypotrochoid) vs Lissajous curves: messing around with a bunch
- Gradient flows on metric spaces: define descent methods on non-Euclidean
- Tarski–Seidenberg in action: the set of polynomials having a real root is
- Convergence of random vectors comes with lots of flavors!
- Step size selection is important for gradient descent on ill-conditioned
- Wirtinger derivatives: treat complex valued functions of complex variables
- Tarski–Seidenberg theorem: semi-algebraicity is stable by projection. The
- Checking convexity of degree 2 polynomials is trivial. Checking convexity
- Functions should be manipulated by pullback, measures by pushforward.
- Gradient descent does not always converge. Trajectories can have infinite
- Vitali Milman's insight: convex sets in high dimension look like hedgehogs
- Stochastic gradient descent dynamic on a simple 1-D example.
- Just wrote course notes on the basics of machine learning. Feedback welcome!
- Heartbroken by Wasserstein barycenters!
- Lloyd algorithm (k-means) with non-uniform sampling density.
- Stochastic gradient descent for the semi-discrete Optimal Transport,
- The circle, semi-circle and Marčenko–Pastur laws are three simple examples
- The maximum of n i.i.d. Gaussians is roughly sqrt(2log(n)). This is where
- The Marchenko-Pastur law is the distribution limit of eigenvalues of
- Signs of sparse vectors recovered by l1 minimization index a spherical
- If d(x,y) is a distance, then d(x,y)^p for 0<p<1 is its snowflake
- Optimal Transport barycenters.
- Summing the log of the nearest neighbors distances: a simple Lagrangian
- Fokker-Planck equations are Optimal Transport flows of entropies. Nice
- Eulerian vs Lagrangian discretization of a probability distribution. Two
- Rotating airfoil using Joukowsky conformal map. Code provided!
- Joukowsky conformal map: solving airfoil design in the pre-computer era.
- Durand–Kerner method: surprisingly simple way of computing all the roots
- The QR algorithm is a gem from numerical matrix analysis. Showing
- The unreasonable effectiveness of non-smooth optimization: sharp
- "Sinkhorn Autodiff" paper updated to insist on the core idea of Sinkhorn
- I knew it! Ice-creams, positive polynomials and PSD matrices all have the
- Semi-discrete Optimal Transport is a multiclass SVM. Sinkhorn version is
- I am putting here material (slides/code) related to my tweets. Will be
- n-ellipses generalize ellipses with n foci. Spectrahedra (hence convex)
- Spectrahedra are matrix generalizations of polyhedra. But the class of
- Heat eq on polynomials generates nice roots dynamics. Post by Terry Tao.
- Discrete Cosine Transforms are eigenvectors of Laplacians with different
- Fractional derivative of a Gaussian using the Fourier transform. Code provided.
- "Twisted" Newton iterations generate nice fractal patterns! Colors display
- Understanding global convergence of Newton method is hard ... Code provided.
- Moreau's decomposition generalizes orthogonal decomposition from linear
- Reasonable properties (convexity, differential, stratifiability, partial-
- Fourier is to convolution what Legendre is to inf-convolution.
- Kurdyka-Łojasiewicz property: fundamental to show convergence of descent
- Partial smoothness [Lewis]: the correct notion of "piecewise smooth
- Mirror-stratifiability (Drusvyatskiy/Lewis): generalizes duality of
- Optimal Transport was formulated in 1930 by A.N. Tolstoi, 12 years before
- "An Introduction to Imaging Sciences": a small book for a large audience
- The "dead leaves" model of Georges Matheron applied to the Eiffel Tower.
- Introducing the "Mathematical Tours of Data Sciences". Book (work in
- Our preprint on "Multi-dimensional Sparse Super-resolution". Detailed
- The Optimal Transport geometry of 1-D Gaussians is flat in the (mean,std)
- Motzkin's polynomial is positive but is not a sum of squares. #LifeIsHarderIn2D
- The Legendre transform of the Wasserstein distance has a nice expression!
- Slides for my talk "Optimal Transport for Imaging and Learning"
- Cuturi's Sinkhorn Divergence interpolates between Optimal Transport and
- Super-resolution from M/EEG-like boundary measurements is achievable for 2
- Some insights on using Optimal Transport as registration loss in our
- Comparing distributions using kernels: heavy tail vs. local (Gaussian) kernels.
- Any optimization program is equivalent to a convex (linear) one.
- De Boor least interpolant: the way to do multidimensional polynomial
- Evolution of interpolation as points cluster together. 1D=simple to
- Wasserstein-2 and Energy Distance (Sobolev H^-1) are equivalent for
- Numerical Tours on #MachineLearning: regression, classification,
- Last but not least Numerical Tour on Machine Learning: Stochastic Gradient
- New Numerical Tour: Linear Regression and Kernel Methods
- New Numerical Tour: PCA, Nearest-Neighbors Classification and Clustering
- New Numerical Tour: Logistic Classification. Binary, multiclasses,
- The Numerical Tours are now available in @JuliaLanguage! Wavelets, meshes,
- Oldie but Goldie, Cominetti & San Martín 94, detailed analysis of
- Python code for our @miccai2017 paper "Optimal Transport for Diffeomorphic
- Comparing probability measures: vertical vs horizontal displacement /
- Dual weak norms (aka Integral Probability Metrics): a unifying way to
- Displacement interpolation aka Optimal Transport in 1D is equivalent to
- Bures distance on PSD matrices, aka Optimal Transport between Gaussians,
- Displacement interpolation aka Optimal Transport: discrete vs continuous.
- Wasserstein-1 distance (norm!) between measures on graphs: equivalent to a
- Csiszár divergences, a unifying way to define losses between arbitrary
- From deterministic to stochastic matching: Schrödinger problem. See
- Sensitivity Analysis for Mirror-Stratifiable Convex Functions: toward
- A simple way to produce infinite zoom, run 2D auto-regressive model and
- Statistical Learning is Inverse Problems with a noisy covariance matrix /
- L2 vs L1 regularization for inverse problems / correlated designs.
- Slides for my talk "Optimal Transport and Deep Generative Models" tomorrow
- "Optimal Transport for Diffeomorphic Registration", just plug an OT loss
- Looks like the syllabus for my course next year will be quite heavy!
- GAN and VAE from an Optimal Transport Point of View
- Woodbury formula: so simple, so useful ...
- Understanding when the Continuous Basis-Pursuit is better than the Lasso
- Sinkhorn-AutoDiff: Tractable Wasserstein Learning of Generative Models
- The poster "Le transport optimal et ses applications" (on display at the
- Optimal Transport: from Theory to Applications!
- Reverse mode auto-diff in 1 slide. So simple. So useful.
- Optimal transport for applications in computer graphics, by
- Some more GPUs for Optimal Transport!
- Oldie but goldie: Matlab code for non-circular gear generation.
- After 2Y in review, "Model Consistency of Partly Smooth Regularizers"
- Google Scholar citations for "Optimal Transport". Soon trendier than Deep
- Going off-the-grid for super-resolution. Surprisingly effective! Slides
- Iterative soft thresholding for deconvolution: slowly but surely
- The Quantum Optimal Transport project: my largest coding effort ever...
- Unbalanced optimal transport made simple: handling mass creation &
- Quantum Optimal Transport for Tensor Field Processing: paper and code
- My new ERC² project NORIA "Numerical Optimal tRansport for ImAging"!
- View from my new office on the 4th floor @ENS_ULM
- Our preprint on "Bayesian Modeling of Motion Perception using Dynamical
- Slides and videos of the Shannon conference at @_CIRM
- Sparse Support Recovery with Non-smooth Loss Functions: l^1, l^2 and l^inf
- Slides of my general-audience talk "compressed sensing" for the
- Repackaging of the Wavelet Tour's book website
- 20 new Numerical Tours in Python: Wavelets, sparsity, snakes, 3D meshes
- Claude Shannon and data compression: my article on @ImagesDesMaths
- My new office @ENS_ULM !
- Matlab code and Jupyter notebook showcasing Shannon theory of entropic coding
- One Sinkhorn algorithm to rule them all: unbalanced optimal transport
- My popular-science article on Claude Shannon and data compression
- Sugiton's calanques processing
- "Parcimonie, problèmes inverses et échantillonnage compressé", to appear
- New Python "Numerical Tours" on optimization by Laurent Condat: first
- Computing barycenters over the metric space of metric spaces ... our ICML
- New paper: Optimal Transport meets Stochastic Optimization
- Our second optimal-transport-related paper at #SIGGRAPH2016: Entropic
- Optimal Transport strikes back at #SIGGRAPH2016: Wasserstein Barycentric
- Slides of my general-audience talk for the centenary of the
- Staircaising at its best: geometrical performance analysis of Total
- Entropy at its best: theoretical foundations of entropic optimal transport
- Optimal transport made easy using entropic regularization: a new numerical tour
- Toward a definitive answer to unbalanced optimal transport problems
- Sharp performance guarantees for sparse positive spikes deconvolution #sparsity
- Entropic optimal transport made it to SIGGRAPH! #siggraph #optimaltransport
- Website dedicated to my grandfather. Enjoy the paintings!
- New paper on support (in)stability for sparse deconvolution and compressed
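Several tweets above note that optimal assignment in 1-D is simply sorting the values. A minimal NumPy sketch of that idea (the helper name `ot_1d` is illustrative, not from the tweets):

```python
import numpy as np

def ot_1d(x, y, p=2):
    # 1-D optimal assignment: pairing the i-th smallest x with the
    # i-th smallest y is optimal for costs |x-y|^p, p >= 1.
    xs, ys = np.sort(x), np.sort(y)
    return np.mean(np.abs(xs - ys) ** p) ** (1.0 / p)

x = np.array([3.0, 0.0, 1.0])
y = np.array([2.0, 4.0, 1.0])
d = ot_1d(x, y)  # Wasserstein-2 distance between the two empirical measures
```

Here sorting gives the pairing (0,1), (1,2), (3,4), so the distance is 1. The same sort-based pairing underlies histogram equalization seen as 1-D optimal transport.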
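One tweet states that the soft-max is the gradient of the log-sum-exp; a quick finite-difference check of that identity (a self-contained sketch, not code from the tweets):

```python
import numpy as np

def logsumexp(z):
    m = z.max()
    return m + np.log(np.sum(np.exp(z - m)))  # numerically stable log-sum-exp

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Central finite differences of logsumexp should recover softmax.
z = np.array([0.5, -1.0, 2.0])
eps = 1e-6
grad = np.array([
    (logsumexp(z + eps * np.eye(3)[i]) - logsumexp(z - eps * np.eye(3)[i])) / (2 * eps)
    for i in range(3)
])
ok = np.allclose(grad, softmax(z), atol=1e-5)
```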
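Sinkhorn's diagonal scaling algorithm, mentioned in several entropic-regularization tweets above, is short enough to show in full. A minimal sketch (function name and parameter values are illustrative):

```python
import numpy as np

def sinkhorn(a, b, C, eps=1.0, iters=2000):
    # Alternately rescale rows and columns of the Gibbs kernel
    # K = exp(-C/eps) so that diag(u) K diag(v) has marginals a, b.
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # entropic transport plan

a = np.ones(3) / 3
b = np.ones(4) / 4
C = (np.arange(3)[:, None] - np.arange(4)[None, :]) ** 2.0  # squared cost
P = sinkhorn(a, b, C)
```

After the final row update the row marginals match `a` exactly; the column marginals converge to `b` linearly in the number of iterations.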
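The Metropolis-Hastings tweet above calls it a surprisingly simple way to sample from a density; the random-walk variant fits in a few lines (a hedged sketch with illustrative names and parameters, targeting a standard normal):

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis(logp, x0, n, step=1.0):
    # Random-walk Metropolis: propose x' = x + step * N(0,1),
    # accept with probability min(1, p(x')/p(x)).
    x, lp, xs = x0, logp(x0), []
    for _ in range(n):
        xp = x + step * rng.normal()
        lpp = logp(xp)
        if np.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        xs.append(x)
    return np.array(xs)

samples = metropolis(lambda x: -0.5 * x**2, 0.0, 20000)
```

Only the unnormalized log-density is needed, which is what makes the method so broadly applicable; the sample mean and standard deviation should be close to 0 and 1.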