there are known knowns, known unknowns, unknown knowns and unknown unknowns about any situation. the first three, by definition, give us 'something' to work with when trying to understand the risk in that situation. risk measures are constructed to evaluate the chance, size and other characteristics of the risk, so that if the risk does occur, we know what penalty and loss we face. the human mind hates uncertainty.
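to make 'risk measure' concrete, here is a minimal sketch of one of the simplest: a historical 99% value at risk on a toy series of made-up daily returns. everything here (the numbers, the confidence level) is purely illustrative, not anyone's real portfolio:

    # a minimal sketch of a 'risk measure': historical 99% value at risk (VaR)
    # on a toy series of daily returns. all numbers are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    returns = rng.normal(loc=0.0005, scale=0.01, size=1000)  # made-up daily returns

    # 99% one-day VaR: the loss threshold exceeded on roughly 1% of days
    var_99 = -np.quantile(returns, 0.01)
    print(f"99% one-day VaR: {var_99:.2%} of portfolio value")

that is all a risk measure is: a number summarising how bad things could get, assuming the data you fed it actually describes the risk you face.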
the fourth situation gives us nothing to work with. here we do not understand what the risk is, what its consequences are, when it will occur, or any of its other characteristics. by following the same path as before (constructing risk measures, assessing the size of the risk, etc.), we end up in a worse position than if we had done nothing, because we are lying to ourselves and to those around us. this is 'charlatanism': we do not understand the risk and we are lying to ourselves by saying that we do.
people in risk management will not really understand what i wrote above. why? because they will tell you 'well... what else do you propose?'. this style of thinking is pathetic: that person's mind does not understand what 'unknown unknown' means and still thinks the situation can be explained, hence the demand for an alternative solution. these savants never ask themselves whether a solution even exists in the first place. the best risk managers, or best risk blah blah, won't be found churning numbers in spreadsheets or building models. they are people who, as Taleb and others put it, have 'skin in the game' and don't spend time saying "well, if you think the Value at Risk model is shit, what do you propose?". instead, they assess risks themselves, build their own beliefs about how to assess them, and take action. they face the consequences and learn from them.
there are other reasons why Taleb has issues with risk management and calls it bullshit, leaving aside Value at Risk and other models entirely. to truly understand something, one should always look at the roots of how it came into existence (never look at the tree, look at the root, etc.).
imagine an experienced and wise antiques dealer who knows his business. he knows he faces some risks he can understand and others that come out of nowhere. he has suffered the consequences when a risk has hit him and he has learned from it. he has an intuition, built through his own mechanisms, about when a risk is likely. this is called knowledge. now imagine that the sale of all antiques becomes regulated and the regulator tells our friend (the dealer) that all of his deals must pass a 'risk measure'. our friend doesn't know what a risk measure is, but he does know his business. over time, he forgets his business and learns the risk measure instead. this change of knowledge is problematic -> years of market mechanisms suddenly get lost.
using the example above, one can see how Value at Risk and other risk models succeed in 'destroying' knowledge rather than 'creating' it. Taleb and other traders point this out in one form or another.
Taleb barks on about stable processes (Levy processes, say), which have thicker tails and put a higher likelihood on "one-off" events occurring. of course he is not unique in appreciating stable processes -> they have been studied for a long time and have many desirable properties. but the antique dealer doesn't give a shit about stable processes, the risk manager doesn't know what the Levy-Ito decomposition theorem is, and the directors don't even know what the numbers mean. so it is pointless. too big to fail. the best advice? stick to simple models, understand the limitations of VaR, and ensure that regulators have the power to punish those who are morons.
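to make the 'thicker tails' point concrete, here is a minimal sketch of my own (arbitrary illustrative parameters, not Taleb's numbers) comparing the probability of a large drop under a gaussian and under an alpha-stable distribution with alpha = 1.7:

    # a sketch of the 'thicker tails' point: compare the chance of a large drop
    # under a gaussian versus an alpha-stable distribution. alpha = 1.7 is an
    # arbitrary illustrative choice, not a calibrated parameter.
    from scipy.stats import norm, levy_stable

    threshold = -5.0  # a 'one-off' move, five scale units below zero

    p_gauss = norm.cdf(threshold, loc=0, scale=1)
    p_stable = levy_stable.cdf(threshold, alpha=1.7, beta=0.0, loc=0, scale=1)

    print(f"P(drop beyond {threshold}) under gaussian:     {p_gauss:.2e}")
    print(f"P(drop beyond {threshold}) under alpha-stable: {p_stable:.2e}")

    # a gaussian-based 99% VaR for comparison: the stable model says moves far
    # beyond this level are not astronomically rare, the gaussian says they are.
    print(f"gaussian 99% VaR: {-norm.ppf(0.01):.2f} scale units")

the gaussian treats a five-scale-unit drop as a near impossibility; the stable distribution treats it as rare but entirely plausible. that gap is the whole argument about fat tails, and it is also why a VaR number built on thin-tailed assumptions tells you very little about the events that actually hurt.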