We show that when XCS is applied to complex real-valued problems, the XCS populations contain structural information. This information exists in the underlying classifier space as the degree of uncertainty associated with the problem space, and we can therefore exploit it to improve overall system performance. We take an information-theoretic approach, introducing a new entropy model for XCS that extracts structural information from dynamically forming substructures. Using this entropy model, we can collectively emphasize or de-emphasize the effect of an individual input. As a complex problem domain, we chose the speaker identification (SID) problem, which challenges XCS with a problem space that may force the learning classifier system to evolve large and highly overlapping populations. The entropy model improved system performance by up to 5-10% on large-set SID problems. Furthermore, the entropy model can assist population initialization at the beginning of the learning process by ensuring a certain level of overall diversity.