The document summarizes techniques for handling missing feature values in recommender models. It discusses how gradient boosted decision trees (GBDTs) and neural networks (NNs) can deal with missing features during training, often without imputing values. For GBDTs, XGBoost and R's GBM take different approaches: XGBoost learns a default direction at each split and sends examples with missing values left or right accordingly, while GBM uses a ternary split with a dedicated branch for missing values. NNs can handle missing features via techniques like dropout, imputing the feature's average, or reserving a dedicated "missing" embedding value. The document concludes that the optimal approach depends on the dataset.
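As a rough illustration of two of the ideas above, here is a minimal sketch (not taken from the source document) using standard xgboost and PyTorch APIs; the feature data, vocabulary size, and variable names are hypothetical.

```python
# Sketch: missing-value handling in a GBDT and in an NN embedding table.
import numpy as np
import xgboost as xgb
import torch
import torch.nn as nn

# --- GBDT: XGBoost treats np.nan as "missing" and learns, at each split,
# a default direction (left or right) for examples whose value is absent.
X = np.array([[0.5, np.nan],
              [0.1, 3.0],
              [np.nan, 1.5],
              [0.9, 2.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])

booster = xgb.train(
    {"objective": "reg:squarederror", "max_depth": 2},
    xgb.DMatrix(X, label=y),   # NaN entries follow the learned default branch
    num_boost_round=10,
)

# --- NN: reserve an extra id in the embedding table for "missing" instead of
# imputing or dropping the example; the model learns a vector for it.
NUM_CATEGORIES = 100           # hypothetical vocabulary size
MISSING_ID = NUM_CATEGORIES    # index reserved for "feature not present"
emb = nn.Embedding(NUM_CATEGORIES + 1, embedding_dim=8)

raw_ids = [42, None, 7]        # None marks a missing categorical value
ids = torch.tensor([MISSING_ID if i is None else i for i in raw_ids])
vectors = emb(ids)             # dedicated learned vector for missing entries
```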