This paper presents a new algorithm for sampling from composite log-concave distributions, i.e., densities proportional to exp(-f(x) - g(x)) for two component functions f and g. The method queries gradients of the smooth component and accesses the other component only through a specialized sampling oracle, achieving convergence rates that match the best known results. The analysis further extends to non-smooth components and to targets satisfying weaker regularity assumptions than strong log-concavity.
The resulting sampler comes with convergence guarantees that scale well in both the dimension and the condition number, making it useful for Bayesian inference and related optimization problems.
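To make the gradient-plus-oracle structure concrete, the following is a minimal, self-contained sketch of an alternating (proximal-style) sampling scheme on a toy one-dimensional target. All specifics here are illustrative assumptions, not the paper's algorithm: we take f(x) = x²/2 as the smooth component and g as the indicator of [0, ∞), so the target is a half-normal and the oracle step x | y ∝ exp(-f(x) - g(x) - (x - y)²/(2η)) happens to be a truncated Gaussian available in closed form. The step size η and the rejection-based truncated-normal sampler are also illustrative choices.

```python
import math
import random

def rgo_halfnormal(y, eta, rng):
    """Toy sampling oracle for f(x) = x^2/2, g = indicator[x >= 0].

    Samples x | y proportional to exp(-f(x) - g(x) - (x - y)^2 / (2*eta)),
    which for this choice is a Gaussian truncated to x >= 0.
    """
    var = eta / (1.0 + eta)
    mean = y / (1.0 + eta)
    sd = math.sqrt(var)
    # Simple rejection sampling; adequate here because the truncation is mild.
    while True:
        x = rng.gauss(mean, sd)
        if x >= 0.0:
            return x

def proximal_sampler(x0, eta, n_steps, rng):
    """Alternate a local Gaussian step with the sampling-oracle step."""
    x = x0
    samples = []
    for _ in range(n_steps):
        # Forward step: y | x ~ N(x, eta), using only local information about x.
        y = rng.gauss(x, math.sqrt(eta))
        # Backward step: x | y via the (here closed-form) sampling oracle.
        x = rgo_halfnormal(y, eta, rng)
        samples.append(x)
    return samples

rng = random.Random(0)
samples = proximal_sampler(x0=1.0, eta=0.5, n_steps=200_000, rng=rng)
mean = sum(samples[1000:]) / len(samples[1000:])
# The half-normal target has mean sqrt(2/pi) ~ 0.798; the chain's
# empirical mean after burn-in should land close to that value.
print(mean)
```

In higher dimensions, or when f is not quadratic, the backward step is no longer closed-form; the point of the paper's oracle abstraction is that efficiency guarantees can be stated in terms of calls to that oracle plus gradient evaluations of the smooth part.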