This document summarizes a method for harnessing deep neural networks with logic rules. The goal is to incorporate general rules and human intuition into neural networks. Rules are expressed in first-order logic and imposed as soft constraints during training. Training alternates between two steps: constructing a rule-constrained distribution q(y|x) that adjusts the network's current predictions toward satisfying the rules, and updating the model parameters θ so the network fits the true labels while also imitating q(y|x). Experiments on sentiment analysis and named entity recognition show that enforcing linguistic rules in this way improves performance.
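
Below is a minimal sketch, not the authors' implementation, of the alternating procedure using PyTorch and a toy linear classifier. The helper `build_teacher`, the function `rule_violation` (a per-example, per-class score of how strongly the rules are violated), and the constants `C` (rule strength) and `pi` (weight on imitating q versus fitting the true labels) are illustrative assumptions chosen to show the idea.

```python
# Sketch of rule-constrained training: build q(y|x) from the current model and
# the rules, then update theta to fit the true labels while imitating q.
import torch
import torch.nn.functional as F

def build_teacher(logits, rule_violation, C=6.0):
    """Construct q(y|x) ∝ p_theta(y|x) * exp(-C * violation(x, y)).
    Labels that violate the rules are down-weighted relative to p_theta."""
    log_p = F.log_softmax(logits, dim=-1)
    log_q = log_p - C * rule_violation
    return F.softmax(log_q, dim=-1)   # renormalize into a distribution

def train_step(model, optimizer, x, y_true, rule_violation, C=6.0, pi=0.5):
    logits = model(x)
    with torch.no_grad():             # the teacher q is held fixed for this update
        q = build_teacher(logits, rule_violation, C)
    # Student loss: balance fitting the true labels and imitating the teacher q.
    ce_true = F.cross_entropy(logits, y_true)
    ce_teacher = -(q * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
    loss = (1 - pi) * ce_true + pi * ce_teacher
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: 2-class sentiment with a hypothetical rule that softly forbids
# class 0 for the first four examples in the batch.
model = torch.nn.Linear(8, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(16, 8)
y = torch.randint(0, 2, (16,))
violation = torch.zeros(16, 2)
violation[:4, 0] = 1.0
train_step(model, opt, x, y, violation)
```

In the sketch a fixed `pi` weights the teacher term; the grounding of first-order rules into per-label violation scores and the schedule for this weight are where the full method does more work than shown here.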