This document describes how to compute scientometrics, such as the h-index (the largest h for which an author has h papers with at least h citations each), efficiently on large academic datasets using MapReduce. It introduces four MapReduce algorithms that parallelize the computation. The most efficient uses an in-mapper combiner: during the map phase, each mapper buffers a list of <paper, score> pairs for each unique author and emits one aggregated pair per author instead of one pair per record. Compared to the alternatives, this reduces running time by at least 20% and bandwidth usage by around 13%. Experiments on a dataset of 1.8 million papers confirmed that the in-mapper-combiner method performed best in both runtime and I/O volume.
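To make the in-mapper combining pattern concrete, the sketch below simulates it in plain Python without a Hadoop cluster. It is not the document's actual implementation: the record layout, the class name HIndexMapper, and the use of citation counts as the "score" are assumptions made for illustration. The key idea it demonstrates is buffering <paper, score> pairs per author inside the mapper and emitting one aggregated list per author, so the shuffle phase moves far fewer key-value pairs.

```python
from collections import defaultdict


def h_index(citation_counts):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    for i, c in enumerate(sorted(citation_counts, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h


class HIndexMapper:
    """Map phase with an in-mapper combiner: rather than emitting one
    <author, (paper, score)> pair per input record, buffer the pairs
    per author in memory and flush one aggregated list per author."""

    def __init__(self):
        self.buffer = defaultdict(list)  # author -> [(paper, score), ...]

    def map(self, record):
        # record: (paper_id, author, score); treating 'score' as the
        # paper's citation count is an assumption of this sketch
        paper_id, author, score = record
        self.buffer[author].append((paper_id, score))

    def close(self):
        # emit one <author, list> pair per unique author this mapper saw
        for author, pairs in self.buffer.items():
            yield author, pairs


def reduce_h_index(author, lists_of_pairs):
    """Reduce phase: merge the per-mapper lists and compute the h-index."""
    scores = [score for pairs in lists_of_pairs for _, score in pairs]
    return author, h_index(scores)


if __name__ == "__main__":
    records = [
        ("p1", "alice", 10), ("p2", "alice", 3), ("p3", "alice", 5),
        ("p4", "bob", 1), ("p5", "bob", 2),
    ]
    mapper = HIndexMapper()
    for rec in records:
        mapper.map(rec)
    # group mapper output by key, as the shuffle phase would
    grouped = defaultdict(list)
    for author, pairs in mapper.close():
        grouped[author].append(pairs)
    for author, lists in grouped.items():
        print(reduce_h_index(author, lists))
    # alice: citations 10, 5, 3 -> h-index 3
    # bob:   citations 2, 1    -> h-index 1
```

With in-mapper combining, a mapper that reads a thousand papers by the same author sends a single key-value pair across the network rather than a thousand, which is the source of the bandwidth and runtime savings reported above.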