As one moves between the general and the specific, two things vary: (1) accuracy and (2) reach. Accuracy: how well your policy, design, idea, or opinion describes the real facts of reality; how few exceptions and edge cases there are. Reach: how many facts of reality, units, or instances your policy, design, or idea describes or accounts for. Accuracy is related to depth and focus; reach, to breadth and speed.
Both are important, and they're in necessary conflict.
This conflict is a function of information density: to communicate or write anything meaningful concisely, whether in human language, artwork, or computer code, requires various forms of abstraction, all of which exchange information-dense realities for more processable, less informationally dense metaphors or representations we can work with. It may also be fair to say that this kind of "density" is really more a matter of physical time and processing limitations (again, whether we mean speech or any other information-bearing thing) than of real physical density. Because we mostly communicate in language, we often stack abstractions, too; being linguistically concrete doesn't mean we're being conceptually concrete, or describing reality accurately at all.
Reach is also about leverage: one solution or idea (or a few) for many units. Accuracy is about eliminating cases of error. With ideas, policies, and designs, accuracy and reach are likely to be in conflict unless they're based on true explanatory models. This is because a truly explanatory model predicts and accounts for every case, but not with an abundance of information. An explanatory model can be light, in information-density terms, yet have incredible reach. It doesn't so much describe reality as mirror it: the relations among its internal components track the relations among the entities in reality it models. Moreover, its accuracy is not so much a matter of technology in observation or control as of its hewing to reality. We do not need a larger sample size or better microscopes to understand why lightbulbs work, nor do we have any real margin of error in our conclusions about them.
There's enormous and increasing pressure on humans to achieve reach in their ideas, designs, morals, and policies. Despite having evolved in small groups, with small-group habits of cognition and emotion, we now live in a global group and must coordinate hugely complex societies. The problems we face are problems at scale. Thus: reach is mandatory. A taxation, software design, or criminal justice solution that cannot be deployed at scale isn't useful to us anymore; indeed, even opinions must scale up. For personal, political, governmental, commercial, literary, expediency-oriented, and many other reasons, we must have solutions that work for more human (H) units or instances, and H is always increasing (even as every member of H must be respected in her or his unpredictable inimitability, range of action, moral agency, autonomy, freedom, and so on).
This pressure often inclines people to accept induction- or correlation-based models or ideas, inaccurate to varying degrees, in lieu of explanatory models. That is: in many situations, we'll accept aggregates, groups, central plans, reductions, otherings, dehumanizations, shorthand symbols, and so on because (1) they serve our ends, sometimes without any apparent cost, or (2) we have nothing else. To have explanations with reach in areas where we have no models, we commit philosophical fraud: we transact with elements and dynamics we cannot predict or understand, and we hope for the best (better, it seems, than admitting "I don't know"). How we talk about speculative models, reductive schemas, and plural entities (peoples, companies, generations, professions, even events) reveals a lot about how much we care for epistemological accuracy. And not caring about it is a kind of brutality: it means we don't care what happens to the lives inaccurately described, not captured by our model, not helped by our policies, unaided by our designs, not included in our normative plan.
In politics, design, art, philosophy, and even ordinary daily thinking, being consciously aware of this tension, and of the pressure to exchange accuracy for reach, is as important as recognizing the difference between "guessing" and "knowing." Otherwise, one is likely to adopt ideas with reach without recognizing the increased risk of inaccuracy that reach brings. One will be tempted to ignore the risk even when one knows it, tempted by how nice it is to have tidy conceptions of good and evil, friend and foe, progress and failure.
Reach is innately, personally pleasing in part because it privileges the knower, whose single thought describes thousands or millions of people, whose simple position circumscribes civilization's evolution, the history of religion, the nature of economics, the meaning of life. Exceptions be damned! But in general, if an idea has significant reach, it must be backed by an explanatory model, or it will be either too vague or too inaccurate to be useful. And if it's a political or moral idea, the innocent exceptions will be damned along with the guilty. Hence the immorality of reduction, othering, and inaccurate ideas whose reach makes them popular.