This document discusses key issues in machine learning for data mining, including the bias-variance dilemma. It explains that the expected difference between the optimal regression and a learned model can be decomposed into bias and variance. Bias measures the discrepancy between the expected output of the learned model, averaged over training sets, and the optimal regression, while variance measures how much the learned model's output fluctuates around that expectation from one training set to another. The two terms trade off against each other: models flexible enough to drive bias down tend to exhibit higher variance, and constraining a model to reduce variance tends to increase its bias. This tension is known as the bias-variance dilemma. Cross-validation and confusion matrices are also introduced as evaluation techniques.
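
As a worked reference, the standard decomposition of the expected squared error can be written as follows. The notation here is a generic sketch, not taken from the document: \( f(x) \) denotes the optimal regression, \( \hat{f}_D(x) \) the model learned from training set \( D \), and the expectation is over training sets.

```latex
\mathbb{E}_D\!\left[\bigl(\hat{f}_D(x) - f(x)\bigr)^2\right]
  = \underbrace{\bigl(\mathbb{E}_D[\hat{f}_D(x)] - f(x)\bigr)^2}_{\text{bias}^2}
  \;+\;
  \underbrace{\mathbb{E}_D\!\left[\bigl(\hat{f}_D(x) - \mathbb{E}_D[\hat{f}_D(x)]\bigr)^2\right]}_{\text{variance}}
```

The cross terms cancel because \( \mathbb{E}_D[\hat{f}_D(x) - \mathbb{E}_D[\hat{f}_D(x)]] = 0 \), which is why the error splits cleanly into the two components discussed above.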
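A minimal sketch of k-fold cross-validation follows, assuming only NumPy; the function names (`k_fold_cv`, `fit`, `predict`) are illustrative and not from the document.

```python
import numpy as np

def k_fold_cv(X, y, fit, predict, k=5, seed=0):
    """Estimate generalization error with k-fold cross-validation.

    fit(X_train, y_train) -> model; predict(model, X_test) -> predictions.
    Returns the mean squared error averaged over the k held-out folds.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))        # shuffle before splitting
    folds = np.array_split(idx, k)       # k roughly equal folds
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        pred = predict(model, X[test])
        errors.append(np.mean((pred - y[test]) ** 2))
    return float(np.mean(errors))

# Toy usage: ordinary least squares on synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

ols_fit = lambda Xtr, ytr: np.linalg.lstsq(Xtr, ytr, rcond=None)[0]
ols_predict = lambda w, Xte: Xte @ w
print(k_fold_cv(X, y, ols_fit, ols_predict, k=5))
```

Because every example is held out exactly once, the averaged fold error is a less optimistic estimate of generalization error than the error on the training data itself.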
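Similarly, a confusion matrix can be built with a few lines of code. The sketch below assumes integer class labels; the helper name `confusion_matrix` is illustrative.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index the true class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy binary example: class 1 is the "positive" class.
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 0, 1, 0]
cm = confusion_matrix(y_true, y_pred, n_classes=2)
print(cm)                        # [[TN FP], [FN TP]] in this layout
tn, fp, fn, tp = cm.ravel()
print("accuracy:", (tp + tn) / cm.sum())
```

Reading off the diagonal gives the correctly classified counts per class, while the off-diagonal entries show exactly which classes the model confuses with one another.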