Learning a set of patterns in a content-addressable memory is an important aspect of many neurocomputational systems. Historically, this has often been done using Hebbian learning with attractor neural networks such as the standard discrete-valued Hopfield model. However, such systems are currently severely limited in terms of their memory capacity: as an increasing number of patterns are stored as fixed points, spurious attractors ('false memories') are increasingly created and compromise the network's functionality. Here we adopt a new method for identifying the fixed points (both stored and false memory patterns) learned by attractor networks in general, applying it to the special case of a continuous-valued analogue of the standard Hopfield model. We use computational experiments to show that this continuous-valued model functions as a content-addressable memory, characterizing its ability to learn training examples effectively and to store them at energy minima having substantial basins of attraction. We find that the new fixed-point locator method identifies not only learned memories but also many of the spurious attractors that occur. These results are a step towards systematically characterizing what is learned by attractor networks, and may lead to more effective learning by allowing the use of techniques such as selective application of anti-Hebbian unlearning of spurious memories.
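To make the setting concrete, the following is a minimal sketch of Hebbian storage and recall in a continuous-valued Hopfield-style network. It is illustrative only, not the specific model or fixed-point locator studied here: we assume synchronous dynamics x ← tanh(β W x) with a hypothetical gain parameter β, outer-product (Hebbian) weights with zero diagonal, and arbitrary sizes of our own choosing.

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian (outer-product) learning rule over +/-1 patterns; zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def recall(W, x, beta=10.0, steps=50):
    """Continuous-valued synchronous dynamics: x <- tanh(beta * W x)."""
    for _ in range(steps):
        x = np.tanh(beta * W @ x)
    return x

rng = np.random.default_rng(0)
n, p = 100, 5                      # 100 units, 5 stored patterns (low load)
patterns = rng.choice([-1.0, 1.0], size=(p, n))
W = hebbian_weights(patterns)

# Content-addressable recall: corrupt a stored pattern, then let the
# dynamics relax it back towards the nearby fixed point.
noisy = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)
noisy[flip] *= -1                  # flip 10 of 100 components
recalled = recall(W, noisy)
overlap = recalled @ patterns[0] / n   # ~1.0 means the memory was recovered
print(round(float(overlap), 2))
```

At low loading (p/n well below the classical ~0.138 capacity), the corrupted cue typically relaxes to a state with overlap near 1 with the stored pattern; as more patterns are stored, spurious fixed points of exactly this dynamics appear, which is the phenomenon the abstract's locator method is designed to detect.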