This paper proposes a method for human action recognition based on contour history images extracted from silhouettes, tracking of the movement of the body's center, and the relative dimensions of the bounding box enclosing each contour history image. Features are extracted and reduced with three alternative methods: partitioning the contour history images into rectangles, a shallow autoencoder neural network, and a deep autoencoder neural network. The reduced features are then classified by a neural network classifier. The proposed method achieved a recognition rate of 98.9% on a standard human action dataset, demonstrating its potential for real-time human action recognition applications.
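As an illustration of the core representation, the sketch below shows one plausible way to accumulate per-frame silhouette contours into a single grayscale "contour history" image, where recent contours appear brighter than older ones. This is an assumption-laden toy implementation for intuition only (the function name, the decay scheme, and the 4-neighbour contour test are choices made here, not details taken from the paper):

```python
import numpy as np

def contour_history_image(silhouettes, decay=0.9):
    """Illustrative sketch (not the paper's exact definition).

    silhouettes: list of 2-D binary arrays (1 = foreground person).
    Each frame's contour is stamped at full intensity while earlier
    contours fade by `decay`, so the result encodes shape and motion.
    """
    h, w = silhouettes[0].shape
    chi = np.zeros((h, w), dtype=float)
    for sil in silhouettes:
        sil = sil.astype(bool)
        # Crude contour: foreground pixels missing at least one
        # foreground 4-neighbour (image-border pixels count as contour).
        padded = np.pad(sil, 1, constant_values=False)
        interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                    & padded[1:-1, :-2] & padded[1:-1, 2:])
        contour = sil & ~interior
        chi *= decay          # fade older contours
        chi[contour] = 1.0    # stamp the newest contour at full value
    return chi
```

For example, feeding it two frames of a small square shifted to the right yields an image where the current contour has value 1.0 and the previous contour has the decayed value.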