This talk demonstrates a 145-fold speedup in training time for a machine learning pipeline built with tidymodels, achieved through four small changes. By switching a grid search on a canonical model to a more performant modeling engine, hooking into a parallel computing framework, adopting an optimized search strategy, and defining the search grid carefully, users can drastically cut the time needed to develop machine learning models with tidymodels without sacrificing predictive performance.