Lessons for Asset Allocators in a Bottle of Wine
Albert-Laszlo Barabasi’s new book, The Formula (https://amzn.to/2RIEwqK), is an interesting and engaging, fact-driven analysis of the ways networks influence success across many fields of endeavor.
Central to his argument are two claims: (1) performance and success are different animals, and (2) performance drives success only when performance can be measured accurately; otherwise, networks drive success. Barabasi focuses his analysis on areas where performance metrics are poor or nonexistent.
One particularly interesting insight is that expert judgments of relative quality are sucky where they seem to matter most, specifically in determining the relative rankings of high quality alternatives. In one example he draws from research on wine judging in competitive tastings.
Wines submitted to competitions are in all likelihood samples from the upper tail of the distribution of wine quality. Therefore, differences in quality among these wines will be small, because there is a practical upper bound on wine quality. Practically speaking, the overall quality differences among the wines are too small at the upper tail of quality for reliable ranking. 
The research cited by Barabasi suggests expert judges are very inconsistent in their rankings of these high quality wines. All the effort expended on ranking yields essentially random orderings among them.
Barabasi’s conclusion in this case is that the best strategy for a consumer is to stay away from the low quality wines and to buy the cheapest of the set of highest quality wines, subject to one’s preferences over specific wine characteristics (e.g., red/white, fruity/earthy, etc.).
Let’s apply this insight to the task asset allocators face when deciding how to allocate (rank) among competing hedge fund opportunities.
Performance measures for hedge funds are noisy, in part because common measures don’t distinguish skill from exposure to market beta, or skill from luck. Performance measures for new or recent start-ups are even less reliable, because there is not yet sufficient data. In these cases, expert judgment trumps performance.
But how good can expert judgment be? 
The set of new or recently launched funds is not a random sample from the universe of individuals with trading experience. HF start-ups will, by and large, represent the upper tail of the distribution of trading talent (at least measured by realized prior performance). HF hopefuls will not market successfully with a pedigree of mediocrity. 
It is possible that expert judgment applied to the allocation decision will be very inconsistent in ranking these funds.  If this is the case, the best decision rule may be as follows: 
Put aside the limited performance data available. Then, subject to one’s preferences over specific fund characteristics (e.g., strategy, geography, turnover, etc.), allocate to the funds that have the lowest fee structures and that (secondarily) will permit the most reliable performance measurement going forward.
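A minimal sketch of this decision rule might look like the following. The fund records, field names, preference filter, and the idea that more frequent reporting permits better measurement are all hypothetical illustrations, not a recommendation of any particular screen.

```python
# Sketch of the allocation rule described above. All fund records, field names,
# and thresholds are hypothetical.

candidate_funds = [
    {"name": "Fund A", "strategy": "macro", "fee_mgmt": 0.020, "fee_perf": 0.20, "reporting": "monthly"},
    {"name": "Fund B", "strategy": "macro", "fee_mgmt": 0.015, "fee_perf": 0.15, "reporting": "weekly"},
    {"name": "Fund C", "strategy": "credit", "fee_mgmt": 0.010, "fee_perf": 0.10, "reporting": "monthly"},
]

# Step 1: restrict to funds matching the allocator's preferences (e.g., strategy).
preferred = [f for f in candidate_funds if f["strategy"] == "macro"]

# Step 2: rank primarily by total fee load (cheapest first) and secondarily by how
# reliably performance can be measured going forward (here proxied, as an assumption,
# by reporting frequency).
frequency_rank = {"daily": 0, "weekly": 1, "monthly": 2}
ranked = sorted(
    preferred,
    key=lambda f: (f["fee_mgmt"] + f["fee_perf"], frequency_rank[f["reporting"]]),
)

for fund in ranked:
    print(fund["name"], round(fund["fee_mgmt"] + fund["fee_perf"], 3))
```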
The Power of Preconception
Preconceptions lead us toward quick judgment, suspension of critical analysis, and underweighting discrepant information. Here is a compelling illustration.
In 1979, Keith Critchlow published a monograph on the status of science in the Neolithic period, Time Stands Still: New Light on Megalithic Science. In chapter 7 he discusses five stone balls carved during the Neolithic period and found in Scotland. Based on an examination of the carvings he concludes that the balls represent each of the five Platonic solids and, as a result, the five solids must have been known to humans 1,000 years before Plato discussed them in his “Timaeus” of about 360 B.C. Pretty cool, if true.
In 2012 David R. Lloyd published a critical examination of Critchlow’s claim, called “How Old are the Platonic Solids?” (reprinted in The Best Writing on Mathematics 2013). In this article he concluded that the evidence does not support any aspect of Critchlow’s claim (other than that the referenced stones are indeed from the Neolithic period).
It is possible that Critchlow perpetrated a hoax, but it is also possible that Critchlow’s prior beliefs about the state of Neolithic science were so strong that those beliefs drove his interpretation of the physical evidence. We may call this an extreme case of confirmation bias.
Critchlow published a picture of these five balls (not reproduced here), in which he had added tape to highlight the shape of each ball.
Lloyd’s analysis found, among other things, that:
1. The balls were likely found in different locations, not together.
2. The taping of different balls followed different rules, which affects the visual perception of the carvings.
3. The patterns on two of the five balls were actually the same (once viewed without the tape).
4. When the total population of all such balls found is analyzed, (a) many patterns other than these four are found, and (b) the relative frequencies are far from uniform, with the simplest patterns being far more common.
5. There is an alternative explanation for the shapes, one that would not have required knowledge of the geometry of the Platonic solids.
Critchlow was constructing an argument, of which the analysis of these stones was but one part, that Neolithic humans had greater mathematical knowledge than had been previously recognized.
Critchlow was looking for confirming evidence, and he found it in his interpretation of these stones. He was already mentally prepared to find evidence of early mathematical knowledge and when he saw stone ball carvings with shapes close enough to a pattern known to him, namely the Platonic solids, he then fit a narrative to those shapes that confirmed what he was already prepared to believe.
Tactics for Thinking (Critically)
I recently served as a judge on two student essay competitions. The submissions I saw displayed an almost complete absence of what John Dewey would call "reflective thought", i.e., critical, analytical thinking.
Absent reflective thought, perception of a problem and the space of potential actions are constrained by the problem's appearances.
Absent reflective thought, individual details are taken to be true on their face. The set of details viewed together is accepted as a complete description of the relevant environment.
Absent reflective thought, decisions are made on the basis of appearances and are potentially misdirected or inadequate.
In this post I am sidestepping any discussion of how critical thinking relates to cognitive processes and cognitive biases. I am avoiding all philosophical “Why” questions. Below I simply suggest a checklist approach for success at critical thinking.
Tactics for Thinking (Critically)
Demonstrate that you understand the problem by stating it in your own terms and explaining how the problem fits into some broader, relevant conceptual context.
Explain why you chose a particular approach for evaluating the problem. Explain why potential alternatives were discarded, or why, even though they might have been conceptually better, you nevertheless chose not to use them. Identify potential weaknesses of the approach taken and how awareness of those weaknesses affected the analytical approach.
Identify the assumptions that were explicit and implicit in the definition of the problem, the assumptions you made in interpreting the problem, and how you chose a conceptual or mathematical framework for attacking it. State any assumptions made in setting parameters chosen during the analysis stage.
Detail your decisions about which and how much data to use in your analysis. Describe how your data differed, if at all, from the data you might have liked to have used. Describe how you dealt with data quality in your data, such as missing data or outlier data. Explain your thoughts on the possible impact of your data decisions on the results of or insights from your analysis.
Distinguish, both in the given statement of the problem and in your analysis of the problem, important elements driven by intuition (metaphysics) from those based on deductive reasoning (logic) or induction from data analysis (empirical science).
Acknowledge uncertainties that affect the conclusions reached. Conduct sensitivity analysis to identify how conclusions are affected using different parameter values. Discuss alternative plausible scenarios (and assign them subjective probabilities). Acknowledge alternative possible interpretations of the results of your analysis, and discuss potential ways to differentiate between these and your interpretation of the analysis. Evaluate the extent to which the results are dependent on the particular set of data used.
Perceptions and Misperceptions in Risk Management
The right answer to the wrong question isn’t worth much. Fixed perceptions can bar the path to asking the right questions.
Risk management has as its goal improving decision making in situations of uncertainty. We see the success (or failure) of risk management in achieving this goal as fundamentally dependent on the collection and manipulation of data. We are less likely to explicitly identify the interpretation of data, the choice of data, or the choice of data manipulation methods as also affecting the success of risk management.
However, data selection, manipulation and interpretation are framed by a risk manager’s domain expertise (familiarity with the environment under scrutiny and a known set of quantitative tools proven to be useful in the past).
Domain expertise is, at bottom, a set of ideas about how the world works, how the world can be understood, and how future outcomes (or their relative probabilities) can be manipulated. These ideas, or preconceptions, powerfully affect the solutions risk management will gravitate toward.
Below I discuss specific ways that preconceptions may lead to misperceptions of risk, and suggest mitigants that may limit the potentially adverse consequences of misperceptions in risk management decision-making.
To begin, here is an illustration of the power of preconceptions to influence decision-making.
On the Power of Established Perceptions
At the end of the 19th century, one of the great challenges in naval warfare was the inaccuracy of large guns fired from one vessel toward another. The problem was caused by the motion of the rolling sea, which made targeting difficult and firing slow.
Admiral Percy Scott of the British navy innovated a solution by modifying the operation of naval guns in two ways. The first was to change the gear ratio on the elevation mechanism so the operator could adjust the elevation quickly and easily. The second was to change the telescopic sight on the gun so that it would not recoil with the gun when the gun was fired. These modifications allowed the gunner to keep sighting the target continuously, enabling what Scott called “continuous-aim firing.”
The effect was dramatic. Prior to this innovation, in an 1899 naval exercise five ships each fired for five minutes at a floating target at the then-conventional range of 1,600 yards (just under 1,500 meters); they scored two hits in their twenty-five combined minutes of firing. Six years later a single, Scott-modified gun firing at a similar target recorded fifteen hits in one minute, half within a bull’s eye about 4 feet square.
In 1900 Admiral Scott shared his innovation with a junior officer of the US Navy, Lt. William Sims, who then replicated Scott’s innovations and results on a US Navy ship. Sims advocated for this innovation, preparing and submitting to the Navy Department in Washington DC several reports describing the gun modifications made and the results obtained.
The response to Sims’ work came from the Chief of the Bureau of Ordnance. The response stated that continuous-aim firing is not possible.
The Bureau of Ordnance was responsible for approving the gunnery equipment then used on ships. Sims’ reports provided evidence that was contrary to the judgment of the Chief of the Bureau that the guns he had approved were the best guns possible for use in the US Navy. The Bureau Chief’s reply stated that any problems with targeting must be the result of the gun operators, not the equipment. Only a direct appeal by Sims to the US President, who one could assume had no prior perceptions about naval gunnery, resulted in the implementation of this innovation.
The problem isn’t simply that the Navy exhibited an aversion to change. Rather, the problem is that preconceptions held by the Navy bureaucracy blinded them to a solution to the gunnery problem that required objective evaluation of data that contradicted their prior beliefs.
What is thought to be an axiom may be an assumption
One of the common criticisms of risk management as it was practiced leading up to the financial crisis is that risk models for structured real estate securities assigned zero probability to outcomes in which prices fall. Perhaps that criticism is founded on 20/20 hindsight. Nevertheless, it is notable that within at least some models in use, it was not possible to ask what happens if house prices fall.
Once a paradigm is adopted certain questions no longer can be asked because they have no meaning within the adopted paradigm.
Consider the implications of this in the following illustration from geometry. Around 300 BC the Greek geometer Euclid published a compendium of what was then known in geometry. So influential was this book, The Elements, that it remained the standard geometry resource for more than 1,500 years.
The Elements begins with 5 postulates and several definitions. A postulate, or axiom, is something taken to be self-evidently true (by every reasonable person) and therefore is something not requiring proof.
Euclid’s 5th postulate is different from the others in that mathematicians questioned whether its claim was self-evident.
Euclid’s 5th postulate says: draw two lines (in a plane) and then drop a third line so that it lies across the first two; if the two interior angles formed by that third line sum to less than 180 degrees, then the first two lines must, if extended far enough, intersect.
That last bit, “extended far enough,” is where a problem arises. The “parallel postulate,” as it is called, is not self-evident, because you may not be able to convince yourself that its claim must be true. How would you convince yourself if the sum of the interior angles were 179.999999 degrees or 180.000001 degrees, for example?
If the 5th postulate is not self-evidently true, then it isn’t an axiom. This is a big problem, because some of the theorems in Euclid relied on the 5th postulate for their proof. All would be well if it could be shown that the 5th postulate can be derived from the other postulates. Unfortunately, no one was able to prove that was the case.
The conclusion drawn from all this effort is that the 5th postulate is actually an assumption. This is key for thinking about the impact of perceptions. Because the parallel postulate is actually an assumption, a question can be asked that could not have been asked otherwise: what happens if the “parallel assumption” is replaced with something more general?
When mathematicians experimented with relaxing the parallel assumption, they discovered (among other things) elliptical geometry (geometry on a sphere), which, as it turns out, describes the world in which we actually live.
Relaxing the parallel assumption revealed (among other things) that distance measured in Euclidean terms was different than distance measured in other geometries. In elliptical geometry distance is determined using something called the haversine formula (https://en.wikipedia.org/wiki/Haversine_formula), whereas in Euclidean geometry the familiar Pythagorean theorem (https://en.wikipedia.org/wiki/Pythagorean_theorem) is used for measuring distance.
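To make the contrast concrete, here is a small sketch comparing the two distance measures for the same pair of points on the Earth’s surface (the coordinates and the rough 111 km-per-degree scaling are illustrative):

```python
import math

def euclidean_distance(x1, y1, x2, y2):
    # Pythagorean theorem: distance in a flat (Euclidean) plane.
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

def haversine_distance(lat1, lon1, lat2, lon2, radius_km=6371.0):
    # Haversine formula: great-circle distance on a sphere (elliptical geometry).
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Illustrative points: New York (40.71 N, 74.01 W) and London (51.51 N, 0.13 W).
print(haversine_distance(40.71, -74.01, 51.51, -0.13))  # roughly 5,570 km along the sphere

# Naively treating latitude/longitude as flat x/y coordinates and applying the
# Pythagorean theorem (scaled by ~111 km per degree) gives a very different answer.
print(euclidean_distance(-74.01, 40.71, -0.13, 51.51) * 111.0)
```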
Because Euclid mistook an assumption for an axiom, all who worked within the Euclidean paradigm were blind to the fact that different answers to geometrical questions might be possible.
It is a good idea to make explicit, and test for self-evident-truthiness, those preconceptions of what is axiomatically true in any risk management decision. The goal here is to enable the risk manager to ask the broadest range of questions about the risky environment.
Euclid’s problem suggests another, perhaps more insidious, risk management risk arising from preconceptions about what is an axiom.
What was at first thought to be an assumption later may be treated as an axiom
When models are first developed the embedded assumptions are clear in the minds of both the model developers and early adopters. For a successful model, its users over time may begin to perceive the model’s assumptions more as if they were axioms. While universal truths don’t change, the validity of assumptions may. Assumptions must be brought out and reexamined regularly to reconfirm their appropriateness. This will not happen if an assumption, as a result of prior successes, is implicitly accorded the status of an axiom.
In the WSJ on January 14, 2016, Greg Ip criticized faulty modeling assumptions of “perpetual” growth and “unlimited” demand for commodity inputs in China and “perpetual” increases in oil prices supported by Saudi Arabian policies.
All assumptions are by their nature faulty with respect to some aspect of how the world works. The usefulness of assumptions depends on the trade-off, as perceived by the user, between the gain in problem simplification achieved by making the assumption and the damage done in abstracting from some potentially impactful aspects of the way the world works.
Ip was in fact deploring the use of these assumptions, because he felt they reflect, in the current environment, a poor trade-off between simplification and damage. By calling out this “peril”, he was pointing out what can result when assumptions, once made, are not subsequently subjected to critical analysis.
When model assumptions are not actively re-examined, model users will show a lack of attention to disconfirming evidence. For example, it is easy to explain away portfolio risk model backtest exceptions as representing idiosyncratic or even expected behavior. The regulatory mandate that banks face for portfolio risk model backtesting and the penalties that can be imposed when models perform poorly act to enforce the scrutiny of modeling assumptions.
To mitigate this perceptual problem, assumptions should be stated explicitly and a formal process should be established for re-validation of assumptions.
Data do Not Speak for Themselves; We Speak for the Data
It is sometimes said that statistics don’t lie, but, really, they don’t truth either. It is a trick of argument to assign anthropomorphic properties to statistics, as doing so permits us to attribute objectivity to them. The attribution of objectivity of risk statistics disguises (even from ourselves, sometimes) both the subjectivity of choices made that lead to their calculation and the subjectivity of our interpretations that follow their calculation.
In a Value-at-Risk calculation, assumptions about the generating process for historical data (e.g., the model assumes normally distributed returns or constant volatilities) affect the values of statistics calculated using the model. Interpretations of the VaR results are both informed by our knowledge of the world (e.g., is this a risk-off environment) and colored by our preconceptions about the way the world works (e.g., is the market mean-reverting).
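As a minimal sketch of that dependence (simulated returns and illustrative parameters, nothing firm-specific), the same return history produces different one-day 99% VaR figures under a normal-distribution assumption than under a plain historical-simulation reading of the data:

```python
import random
import statistics

random.seed(7)

# Simulated daily return history with fatter-than-normal tails (illustrative only):
# mostly quiet days, occasionally a much larger move.
returns = [random.gauss(0.0, 0.01) if random.random() > 0.05 else random.gauss(0.0, 0.04)
           for _ in range(1000)]

confidence = 0.99

# Parametric VaR: assumes returns are normally distributed with constant volatility.
mu = statistics.mean(returns)
sigma = statistics.stdev(returns)
z_99 = 2.326  # 99th percentile of the standard normal distribution
parametric_var = -(mu - z_99 * sigma)

# Historical-simulation VaR: no distributional assumption, just the 1st percentile
# of the observed return history.
sorted_returns = sorted(returns)
historical_var = -sorted_returns[int((1 - confidence) * len(returns))]

print(f"Parametric (normal) 99% VaR:   {parametric_var:.4f}")
print(f"Historical-simulation 99% VaR: {historical_var:.4f}")
```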
The impact of preconceptions on interpretation is illustrated by a paper by Bollen and Whaley (Journal of Finance (2009)). The authors look at the cross-sectional distribution of monthly hedge fund returns and find the distribution shows a smaller than expected frequency of returns around zero. They conclude this is “caused at least in part by temporarily overstated returns” in order to keep or attract investor money. That is, they conclude hedge funds on average commit fraud by manipulating returns. In their paper Bollen and Whaley fail to consider a simpler, albeit less dramatic, alternative.
Hedge funds regularly report to investors the ratio of up months to down months as a measure of performance. Managers have a limited ability to affect this performance measure by altering the level of fund risk in response to recent performance (in other words, creating a feedback effect). A manager may choose to cut risk approaching the end of a reporting period, if the fund is in a small positive return position, in hopes of thereby holding on to a positive month. The manager is effectively giving up the chance of additional positive return in order to reduce the chance of a negative return at month-end. Analogously, a manager with a small negative return approaching month-end may choose to increase fund risk in the hope of moving the fund to a positive return by period-end. The observational evidence of this behavior is consistent with the data in the Bollen and Whaley paper.
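A small Monte Carlo sketch (all parameter values are invented for illustration) shows how this month-end risk adjustment alone can thin out the frequency of small negative monthly returns relative to small positive ones, with no misreporting involved:

```python
import random

random.seed(42)

def simulate_month(days=21, daily_vol=0.01, cut_factor=0.3, boost_factor=2.0, threshold=0.005):
    """Simulate one month of daily returns with risk adjusted near month-end.

    If the month-to-date return is a small positive number approaching month-end,
    the manager cuts risk; if it is a small negative number, the manager adds risk.
    All parameter values are illustrative assumptions, not estimates.
    """
    cum = 0.0
    for day in range(days):
        vol = daily_vol
        if day >= days - 3:  # the last few trading days of the month
            if 0.0 < cum < threshold:
                vol = daily_vol * cut_factor    # lock in a small gain
            elif -threshold < cum < 0.0:
                vol = daily_vol * boost_factor  # gamble on getting back to positive
        cum += random.gauss(0.0, vol)
    return cum

monthly_returns = [simulate_month() for _ in range(100_000)]

just_above = sum(1 for r in monthly_returns if 0.0 <= r < 0.005)
just_below = sum(1 for r in monthly_returns if -0.005 <= r < 0.0)
print("months just above zero:", just_above)
print("months just below zero:", just_below)  # noticeably fewer, with no fraud involved
```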
While this alternative behavior reflects a problem with the incentives of the contract between managers and investors, it is not fraud. This simpler explanation does not come into consideration perhaps because there is at the ready a different interpretation more consistent with the popular, widespread perception of hedge fund managers as ethically suspect.
To guard against the effect of perception on choices made in data analysis and on interpretation of statistical results, it is worthwhile to consider (or at least explicitly acknowledge the possibility of) the impact on results of alternative methods of analysis and alternative interpretations of the results.
Pattern Matching – Don’t Jump to Judgment
When you encounter a new situation requiring decision-making, your mind is first engaged in an attempt to make sense of it. First attempts to make sense of a situation take the form of matching the available information with things you already “know.” Finding a match within your existing knowledge accelerates decision-making. Your mind wants to jump to conclusions – all the time. Making quick judgments is valuable as a survival mechanism. Our brain’s ability to analyze unfamiliar situations quickly is an evolutionary mechanism that gives us a survival advantage, compensating for our relatively feeble physical abilities compared to, say, a mastodon or saber-tooth tiger.
However, the fight-or-flight decision so important when facing potentially life-threatening situations is not well adapted to more common situations requiring decision-making. Thus the automatic matching process also entails the risk of coming to the wrong assessment of the situation by rushing too quickly to judgment.
When matching, we focus on the information that is most familiar and most readily accords with our preconceptions about the way the world works. We discount or even ignore unfamiliar or discrepant information if it gets in the way of what would otherwise be a coherent construction of the situation. We may also unwittingly draw from our existing stock of knowledge to “fill in” bits of a picture that are completely absent from the current situation if doing so helps create coherence.
In risk management, identifying an emerging risk or a change in the risk environment or a breakdown in an investment thesis is a particularly big challenge to our pattern matching natures. This is because an established view, like other preexisting knowledge, closes the door to possible alternative perspectives on the environment.
We can mitigate this perceptual problem by deferring judgment.
Postponing judgment is something we have to do consciously, however. Institute a process that requires explicitly identifying and presenting evidence that is discrepant with the view that is attractive. Once you have adopted a view on a problem, look for disconfirming evidence, and discriminate against confirming evidence.
Finally, try to construct scenarios in which your story will break down and assess the robustness and relevance of your view based on the specifics of those scenarios. Reverse stress testing can be a helpful tool here, as in principle it can neutralize the effect of cognitive biases that might enter into the common way of designing stress tests.
Do Not Extrapolate except with Great Care
Data-driven models of uncertain futures can generate significant insights into a situation, but those insights may not apply to a situation that is outside the range of the data that drives the model’s predictions.
On January 9 and 13, 2016, the Powerball lottery jackpots exceeded all historical experience, at approximately $1B and $1.3B. Walt Hickey of fivethirtyeight.com discussed the importance of this for predicting ticket sales (and hence also the probability that a winning ticket would be sold). Hickey has an econometric model that predicts ticket sales based on jackpot size. The parameters of his model are estimated using historical data on jackpot size and the corresponding number of tickets sold. Hickey knew his model might not predict well in the current situation.
“[B]y plugging $800 million into the model, we’d estimate 428 million tickets sold and a 77 percent chance of at least one winner,” he wrote.
“But since this is the first time we’ve had a jackpot this large, that estimate is based on an extrapolation of previous data. We really don’t know, however, how lotteries behave at these levels. So there’s a solid possibility that my estimate is off.”
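The danger Hickey flags can be made concrete with a toy version of the problem. The functional forms and all numbers below are invented; this is not Hickey’s model. Two models that describe the observed jackpot range reasonably well can diverge sharply once pushed out to a $1.3 billion jackpot:

```python
import numpy as np

# Hypothetical jackpot sizes (millions of dollars) and tickets sold (millions).
# These numbers are invented for illustration; they are not Hickey's data.
jackpots = np.array([40, 60, 90, 120, 160, 200, 250, 300, 400, 500, 600], dtype=float)
tickets = np.array([10, 12, 15, 19, 25, 32, 42, 55, 90, 130, 190], dtype=float)

# Two candidate models, both fit only on the observed range of jackpots.
linear_fit = np.polyfit(jackpots, tickets, 1)     # tickets ~ a + b*jackpot
quadratic_fit = np.polyfit(jackpots, tickets, 2)  # tickets ~ a + b*jackpot + c*jackpot**2

for jackpot in (600.0, 1300.0):  # edge of the data vs. far outside it
    lin = np.polyval(linear_fit, jackpot)
    quad = np.polyval(quadratic_fit, jackpot)
    print(f"jackpot ${jackpot:.0f}M -> linear: {lin:.0f}M tickets, quadratic: {quad:.0f}M tickets")
```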
This type of situation can easily arise in a risk management context. A long time ago, I worked with bank examiners evaluating VaR model implementations at large banks. At one of these assignments, the bank, to speed up the VaR calculation, used a pre-calculated set of option prices based on a volatility grid with a pre-specified number of points on the grid. When it was necessary to find a volatility that was outside the range of points, they extrapolated – using a flat extrapolation (i.e., any volatility outside the grid would have the same value as the last grid point). A flat extrapolation is still an extrapolation, and thus not innocuous.
The bank could have experimented with several alternative approaches for extrapolation and for reference documented the potential range of prices that could obtain. Alternatively, the bank could have modified the grid so that the range of volatilities that were allowed for widened or narrowed based on the volatility-of-volatility.
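As a sketch of why a flat extrapolation is not innocuous (the grid values, option terms, and the use of a plain Black-Scholes price here are all illustrative assumptions, not the bank’s actual setup), compare the price a flat rule assigns to an off-grid volatility with a linear extension of the grid and with full repricing:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(spot, strike, vol, rate, t):
    # Plain Black-Scholes European call price (used here to pre-compute the grid).
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol * vol) * t) / (vol * math.sqrt(t))
    d2 = d1 - vol * math.sqrt(t)
    return spot * norm_cdf(d1) - strike * math.exp(-rate * t) * norm_cdf(d2)

spot, strike, rate, t = 100.0, 100.0, 0.01, 0.25

# Hypothetical pre-computed price grid, indexed by volatility; the grid stops at 40%.
vol_grid = [0.10, 0.20, 0.30, 0.40]
price_grid = [bs_call(spot, strike, v, rate, t) for v in vol_grid]

market_vol = 0.60  # a stressed volatility, well outside the grid

# Flat extrapolation: use the price at the last grid point.
flat_price = price_grid[-1]

# Linear extrapolation: extend the last segment of the price grid.
slope = (price_grid[-1] - price_grid[-2]) / (vol_grid[-1] - vol_grid[-2])
linear_price = price_grid[-1] + slope * (market_vol - vol_grid[-1])

# Full repricing at the actual volatility, for reference.
exact_price = bs_call(spot, strike, market_vol, rate, t)

print(f"flat extrapolation:   {flat_price:.2f}")
print(f"linear extrapolation: {linear_price:.2f}")
print(f"full repricing:       {exact_price:.2f}")
```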
In any case, the bank should have created a diagnostic report identifying the number and nature of options that were priced using extrapolated values. Diagnostic reporting is an effective way to provide a feedback loop for gauging the accuracy of perceptions that have been built into the model framework.
In Sum
The world changes and assumptions about how the world works that are valid at one time may not be valid at others.
Our natural tendency to rush to judgment and our well-documented cognitive biases, such as the recency and availability biases, are powerful influences on our perceptions that can lead to misperceptions in a changing risk environment. It is essential to implement processes that help us maintain awareness of how our perceptions influence the way we have approached risk management problems. Better risk management decisions follow better awareness of these influences.
BIS Issues ‘Progress Report’ on Risk Data Aggregation and Reporting
This “progress report", issued December 15, 2015, seems to have been motivated primarily by a desire to publicize the 1/1/2016 compliance deadline for G-SIBs, inasmuch as it contains no data beyond that which was collected in the 2014 survey of bank progress.
The list of Principles issued by the BIS can be found in the Appendix to the report. My comments are below.
Call me a curmudgeon for pointing this out, especially as compliance is not a matter of choice, but the rationale for adoption of these principles presented by the BIS in the paper is based solely on qualitative assertions about the net benefits.
BIS notes, based on its stale 2014 data, that 14 firms have self-reported as unlikely to meet the deadline.
BIS highlights a few areas of concern (Sections 3, 5):
1.   “Significant gaps in terms of data accuracy and adaptability”
2.   “concern remains about the use of manual processes”
3.   “challenges associated with the documentation of processes” needed to “develop data dictionaries”
4.   “ability to adapt data processes, particularly for ad hoc requests, is persistently weak”
5.   failure of banks to appreciate the interdependencies among the Principles (especially in assessing their compliance)
The recommendations in the report (Section 6) are quite general and high level. These include:
1.   “Continue to promote understanding of the principles”
2.   “clearly articulate risk data aggregation and reporting expectations, in line with their risk appetite”
3.   “have governance arrangements in place for manual processes”
4.   “consider reducing the complexity of their systems to aggregate risks”
5.   Conduct an independent (internal or external audit) review of compliance “in early 2016”
6.   “In cases of noncompliance…provide a remedial plan that is agreeable to supervisors”
Item 4 above (recommendation 6.5 in the paper) seems a bit off-handed and casual for something that might not be feasible within any reasonable length of time and could cost in the tens of millions of dollars (maybe hundreds of millions?).
I have one observation on the Principles themselves. I believe they are missing an important element. I think the BIS blundered by not including among the Principles one that requires regular gap, or exception, reports as part of a complete risk reporting structure. Letting management know what you can’t measure, or can’t measure accurately, allows management to subjectively haircut risk numbers when making decisions, identify concentrations of complex-to-capture risks, and prioritize resources for corrective action, if appropriate. Gap reports also provide some transparency into subjective assessments of materiality.
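A gap report need not be elaborate. A minimal sketch (every risk name, status, and materiality judgment below is invented) might be little more than a table of what is not captured, why, and how material it is thought to be:

```python
# Minimal sketch of a risk data aggregation gap (exception) report.
# Every entry below is invented for illustration.

gap_report = [
    {"risk": "CVA on legacy OTC book", "status": "not measured",
     "reason": "trades booked in a retired system", "subjective_materiality": "high"},
    {"risk": "FX exposure of minority stakes", "status": "measured with a one-month lag",
     "reason": "manual consolidation process", "subjective_materiality": "medium"},
    {"risk": "Intraday liquidity usage", "status": "partially measured",
     "reason": "two payment systems not yet feeding the warehouse", "subjective_materiality": "low"},
]

for entry in gap_report:
    print(f"{entry['risk']}: {entry['status']} ({entry['reason']}); "
          f"materiality judged {entry['subjective_materiality']}")
```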
There is an element of reflexivity in others’ perceptions of us
"It is an undeniable privilege of every man to prove himself in the right in the thesis that the world is his enemy; for if he reiterates it frequently enough and makes it the background of his conduct, he is bound eventually to be right."
X (George F. Kennan), "The Sources of Soviet Conduct," Foreign Affairs Vol. 25, No. 4 (Jul., 1947), pp. 566-582.
In his book, Perception and Misperception in International Politics (1976), Robert Jervis, quoting Kennan, wrote, "If for 'enemy' we read 'friend' in this statement, the proposition seems to be equally true but much less believed." (pp. 82-83)
Nitpicking the OED definition of ‘Penny Stock’
In researching my earlier post (http://bit.ly/1N47QO4) on the origin of 'Penny Stock' I consulted the OED. What I found there was unsatisfying, not only because it contained no information on the origin of 'Penny Stock', but also because the definition itself has some issues.
The OED provides the following definition: "A common stock valued at less than one dollar, and therefore highly speculative."
The definition of penny stock contains both a factual flaw and a logical flaw.
As to the former, the US SEC officially defines a penny stock as a stock valued at less than five dollars. "Penny stock" may often be used in speech to refer to a stock with a value of less than a dollar, but then this should be stated as a colloquial use; else the official definition should be applied.
As to the latter, there is no cause/effect law in the relationship between the price and degree of speculativeness, such that, ipso facto, a stock with a price less than five dollars is "highly" speculative. Therefore, the "therefore" claim is not valid. It would be more accurate IMO to state merely that penny stocks are generally viewed as speculative, unless OED thinks that "highly speculative" is a defining characteristic and is mensurably distinct from merely "speculative."
IMO the use of "highly" in the definition is highly vague (http://plato.stanford.edu/entries/vagueness/) as to the degree of speculativeness entailed. What is going on with the insertion of "highly"? Esp. as OED already defines speculative investments as "involving a high risk of loss". I can conceive of two possibilities. (I discard the possibility that OED just goofed up on this.)
Firstly, perhaps OED intends "highly speculative" as a statement of an identity, i.e. all penny stocks are highly speculative AND all highly speculative stocks are penny stocks. I am not sure OED would want to make that claim, however (because it is obvious, on its surface, that the second part of the claim is false).
Secondly, OED perhaps believes that "highly speculative" represents a subclass of "speculative" stocks that is important to distinguish in the definition of penny stock. OK, I don't agree, as implied in my suggestion above that OED just drop the word "highly". If OED wishes to stick to its guns on "highly" then it must provide some disambiguation in order to tie down the definition. OED should tell us when a stock transcends speculative to attain membership in the highly speculative class. We can't leave that to subjective judgment, else we have a Humpty Dumpty definition ("when I use a word, it means exactly what I want it to mean"). However, here I am at a loss. I can't think of any metric that would compel general assent about the distinction between a stock that is "speculative" and one that is "highly speculative." I leave it to OED to transform "highly speculative" into something mensurable - or drop "highly" from the definition.
What you think was the origin of the term 'penny stock' isn't worth a plug nickel
This is the improbable story of how the shear talent of a man named Penny perhaps led to a new classification of equity securities.
Definitions of “Penny Stock” are a dime a dozen on the internet, but references to the early usage of the term are rarer than a 1794 'Flowing Hair' US silver dollar.
In 1816 the residents of Brooklyn, NY were at loose ends. The hamlet was without a professional barbershop, “which, in all parts of the civilized world, is considered essential to…a properly organized community.” Mr. J. Burke, barber, had departed Brooklyn to practice his “tonsorial genius” on a bigger stage, namely Manhattan. The local longhairs recited paternosters and asked for deliverance from this sorry state of unkemptness. More practical-minded former patrons of Mr. Burke formed a company and “made up a subscription for [constructing] a barber’s shop.”
The subscription price was $5 per share, “a price which no man will deem wrong,” according to a poem by Alden Spooner, written upon the occasion. [Five dollars per share is the definition used today by the US Securities and Exchange Commission as the price below which a stock is called a penny stock.]
Having combed the vicinity for a candidate, the residents settled on “a consequential little man named [Edward] Penny, and the stock subscribed was known as ‘The Penny Stock;’ which, in 1819, Mr. Geo. S. Wise offered, by advertisement, to buy up at its original price.”
Mr. Wise was purser at the Brooklyn navy yard, and active in civic affairs. He seems to have placed great stock in hair, thinking it no folly to follow the follicle.
NOTES:
Facts cited above are from “History of the City of Brooklyn, N.Y.,” Henry R. Stiles (1869) V. 2, Ch. 1, 15-16, and "The Long Island Star", issues of 20 and 27 March, 1816
Likely derived from the same source are two articles mentioning penny stock printed in the Brooklyn Daily Eagle. “Former City Elections”, Brooklyn Daily Eagle, 31 October 1897, page 20, “Barber Shop Set up After 1816 fund Drive”, Brooklyn Daily Eagle, 9 June 1946, page 46.
George S. Wise, Jr. arrived in Brooklyn in 1812 from Virginia, according to American State Papers, page 922, upon his appointment April 25 as “purser in the navy yard”. He was active in civic projects. He was still living in Brooklyn in 1817, according to an entry in “Virginia Ancestors and Adventurers,” volumes 1-3, page 58. http://www.genealogy.com/forum/surnames/topics/wise/428/ http://bit.ly/1I3op0N http://bit.ly/1NIFg5C
Flowing Hair silver dollar: http://bit.ly/1OYf8tl
The song, of which a line was quoted above, was printed in the "Long Island Star" paper of 27 March 1816, p.3, the song being dated 24 March.
The cheeky preamble of the Constitution of the Penny Association, signed by “The President and Directors of the Penny Stock of the village of Brooklyn,” and dated 16 March, was printed in The Long Island Star of 20 March 1816, p.3.
Rules of Inference? We don’t need no stinkin’ rules!
On September 10, 2015, equity research firm Canaccord published in their Morning Summary report their short-term view on the US equity markets. One of the questions the report asked, then answered, was "how meaningful is the post retest rally?" The answer the report gave is "BIG. The three months following a post-crash retest show gains of at least 16%." The evidence in support of this prediction, confidently offered with remarkable certainty and precision, was based on 3 prior instances. Honestly, that is scant evidence in support of such a firm conclusion.
Why, 300 years after the principles of statistical inference were first established, do "sophisticated" analysts still make predictions in utmost ignorance of, and contrary to, those principles? To wit, see the following statements by Jacob Bernoulli, published in 1713:
"For it should be presumed that each [event] will occur or not occur in the future as many times as it has been observed, in a similar state of affairs, to have occurred or not occurred in the past.
"Then, this too can escape no one, that to make a judgement in this manner on any event it is not sufficient to have taken one or two experiments, but one needs a large number of experiments. For the most stupid of men, I know not by what instinct of nature, by himself and without instruction (a remarkable thing to be sure), is convinced that the more observations have been made, the less danger there is of wandering from one's aim."
(Ars Conjectandi, pp. 224-227, trans. by C. J. Lewis, quoted from The Life and Times of the Central Limit Theorem by William J. Adams, 1974. A link to O. Sheynin's translation of Bernoulli is here: http://www.sheynin.de/download/bernoulli.pdf)
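A quick calculation shows just how little three supporting instances pin down. As a sketch, assume a uniform prior on the probability that the pattern repeats (an assumption for illustration, not the only reasonable choice); after observing 3 successes in 3 trials the posterior is Beta(4, 1), and a 95% one-sided credible interval still extends down to roughly 47%:

```python
# How much do three supporting observations really tell us? With a uniform prior
# on the probability p that the pattern repeats, observing 3 successes in 3 trials
# gives a Beta(4, 1) posterior, whose CDF is F(p) = p**4.

posterior_alpha = 3 + 1  # number of observed successes plus one

# Lower end of a 95% one-sided credible interval: solve p**4 = 0.05 for p.
lower_bound_95 = 0.05 ** (1.0 / posterior_alpha)
print(f"95% credible lower bound on p: {lower_bound_95:.2f}")  # roughly 0.47

# Posterior probability that the "rule" actually works less than half the time.
print(f"P(p < 0.5 | 3 successes in 3 trials): {0.5 ** posterior_alpha:.3f}")  # about 0.06
```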
It bugs me when someone says, “After six years of rising share prices, I’m uncomfortable with current valuations”
It bugs me (just a little bit) when I read statements like the following:
“After six years of rising share prices in the United States, I start to feel a little uncomfortable with current valuations.”
I have taken this from a recent post on the SeekingAlpha website, but I won’t provide a link in order to protect (unless you Google it, I suppose) the perpetrator.
I expect this sort of thing to be followed by some sort of statement about “the law of averages”, which is not a law of any kind.
Simply put, the number of consecutive years that the stock market is up provides NO INFORMATION in itself about the likelihood it will be up in the next year.
Stock prices don’t go down simply and exclusively because there has been a run of positive returns. There may be many reasons why, at the current date, an investor might be concerned about a possible negative year for stocks. Those reasons could be related to the economy, various monetary authority actions, the state of global geopolitics, the weather, pandemics, or even merely self-interested human behavior that could generate complex macro dynamics.
Let’s ignore all the reasons why the market might be up or down in a given year, and just look at the annual returns on the S&P 500 since 1928 with no real-world context (you can get those returns here http://bit.ly/1LtY1fm).
In the 87 years beginning with 1928, we have had 46 instances of pairs of consecutive positive return years for the S&P 500. Any given year may appear as both the first year of a pair and the second year of another pair. So there are a total of 86 possible instances of pairs of consecutive positive return years: 1928 cannot be the second year of a pair (well, it could, but the data only starts in 1928), and 2014 cannot be the first year of a pair, only the second (that is, it cannot be the first until we finish 2015).
OK, then, 46 instances out of a maximum possible 86 pairs is a bit more than 50% of the time an up year is followed by another up year.
Next, let’s see what happens if we look at all cases since 1928 in which we can identify 6 consecutive years with positive returns. We find 9 such cases, and in 5 of those cases the next year had a positive return. That is, if we have seen (or conditional on) 6 consecutive positive return years, there is a bit greater than a 50% chance the next year will be an up year.
Hey, that’s what we saw when we just asked how often is a single up year followed by another! What that suggests (not proves) is that there’s no difference between 1 or 6 consecutive up years in terms of the likelihood of the next year being an up year. Either way, the chance is about the same (a slightly favorably biased coin flip).
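For anyone who wants to check the counting, here is a sketch of the two tallies described above. It assumes you have loaded the 1928–2014 annual S&P 500 returns (e.g., from the dataset linked above) into a chronological list; the data itself is not reproduced here.

```python
def consecutive_up_year_stats(annual_returns):
    """Tally the two counts discussed above from a chronological list of annual returns."""
    up = [r > 0 for r in annual_returns]

    # Count pairs of consecutive positive years, out of all possible consecutive pairs.
    up_up_pairs = sum(1 for a, b in zip(up, up[1:]) if a and b)
    possible_pairs = len(up) - 1

    # Among runs of 6 consecutive positive years that have a following year in the
    # data, count how often that following year was also positive.
    runs_of_six = sum(1 for i in range(len(up) - 6) if all(up[i:i + 6]))
    runs_followed_by_up = sum(1 for i in range(len(up) - 6) if all(up[i:i + 7]))

    return up_up_pairs, possible_pairs, runs_of_six, runs_followed_by_up

# Usage (data not reproduced here): load the 1928-2014 S&P 500 annual returns from
# the dataset linked above into a chronological list, then:
#   print(consecutive_up_year_stats(sp500_annual_returns))
```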
[Embedded YouTube video] Thoughts on buy-side risk management, changes wrought by regulation, and what makes a good buy-side risk manager.
A definition of “emerging risk” should explicitly refer to scenarios and causal relationships
PROVISIONAL DEFINITION of Emerging Risk: An out-of-consensus, value-impacting scenario for which some suspected precursors of its (possible) cause are evident.
This definition makes explicit two facets of emerging risks that are not stated in other definitions. First, an emerging risk is a scenario (or a set of related scenarios). Second, an emerging risk is defined by a causal relationship.
The following claims follow from this definition:
An emerging risk cannot be a ‘black swan,’ as a black swan by definition cannot be anticipated, and therefore no cause or scenario can be postulated in advance of the event, and certainly no potential precursors can be claimed to be evident.
An emerging risk does not have to be something never seen before, as some definitions suggest. All that is required is that the potential scenario is out-of-consensus and that there is limited clarity about the potential catalysts and channels of causation.
A possible future event that would result from a phase transition in a dynamic (complex) system exhibiting self-organized criticality cannot be an emerging risk. Phase transitions in such systems are inherently impervious to forecasting and causal identification.
Example of an emerging risk:
A significantly faster than expected interest rate increase cycle following upside surprise in US domestic price inflation led by food price inflation triggered by the California drought and responses by western states’ governments to protect their water supplies.
Emerging risk is not risk in the usual sense, and that matters for how we should discuss, evaluate and respond to it
It is common to use the word “risk” in the definition of emerging risk. That seems potentially circular to me, and wrong.
Take for example this definition of emerging risks from Swiss Re: “We define emerging risks as newly developing or changing risks which are difficult to quantify and which may have a major impact on an organization.” http://bit.ly/1IaV3aw
Why is this use of "risk" in the definition of "emerging risk" OK? Perhaps “emerging” is an adjective describing a type of risk. If so, then a definition of emerging risk that includes “risk” in its own definition is not self-referential. But does this interpretation make sense?
If emerging risks are a type of risk, then they represent a special case, just like a “green door” is a special case of a “door” and influenza-caused illness is a special case of “illness”. Everything that we know about interacting with green doors or influenza is a subset of everything we know about interacting with doors or illnesses. In the sense of “emerging” as a category of risks, such risks are special cases of risk, but they are not special.
But what if “emerging risk” is special, i.e., a thing different from “risk?” If it is special, then managing emerging risk requires something that is not part of our understanding of how to manage risk. I think emerging risks are special.
The validity of a claim to specialness in this case depends on the agreed definition of “risk.” Suppose “risk” is understood in Knightian terms, i.e., “a measurable uncertainty” (Risk, Uncertainty, and Profit, Frank Knight, 1921, p. 19).
Then if an essential characteristic of “emerging risk” is that neither the severity nor the probability of an emerging risk can be readily estimated, it follows that "emerging risk" is not a type of risk. It lies outside the scope of the Knightian definition.
It is worth repeating that, in a Knightian view, emerging risk is not risk. Knight would have concluded instead that emerging risk is a type of uncertainty.
Why care about whether “emerging risk” is risk? One reason I care is that I want to engage people in productive conversations about emerging risks, and the people who I want to engage must adopt different evaluation and decision criteria for emerging risk discussions than for discussions about Knightian risks, because the risk management toolbox we commonly employ is for Knightian risks, and that is of no help here.
The word “risk” triggers a particular framework in the thinking of people with whom we are communicating. This framework includes expectations about the nature of the conversations that will take place, the kind of evidence that will be brought to bear on those conversations, and a particular way of processing that evidence for decision making.
Successful engagement in a dialogue about emerging risk will be complicated by using a term like “risk” if that word creates an unreceptive mindset for the kind of fuzzy data that is inherent in and unique to the emerging risk discussion.
Emerging risk is not risk in the usual sense and that needs to be made clear as a first step to managing emerging risk.
You can argue with success when it’s based on stupidity
In John Scalzi’s Hugo Award winning novel from 2012, Redshirts (a fun twist on the space opera genre and an extended hortatoration (I think I made that word up) on living a purposeful life), we find the book’s hero, Andrew Dahl, after escaping a particularly harrowing situation, reviewing with his friend Finn how they succeeded. Finn says, “In retrospect, the plan has significant logistical issues…On the other hand, it worked. You can’t argue with success.” Dahl replies, “Sure you can when it’s based on stupidity.” (p. 128).
Finn is arguing that the salutary outcome proved the theory was correct. Dahl was arguing that the theory need not be accepted, just based on succeeding events, if the premises are implausible.
More basically, Dahl is arguing that induction is a fault-prone method for evaluating a theory. We earth-bound humans use induction all the time because induction is generally quite useful. But not always.
A cliché among portfolio managers is that it is better to be lucky than right. For a single realization of an uncertain outcome, it is true that the satisfaction from a trade that turns out well is greater than the satisfaction from learning that the trade thesis was correct, irrespective of whether the trade makes money. However, that cliché doesn’t lead managers to employ a trading strategy (i.e., a set of decision rules) that puts complete reliance on chance (e.g., throw dart at stock pages of newspaper, buy stock if dart lands on a company in which the third letter of its name is among the first half of the alphabet, else sell stock short).
We need to understand causes. It is encoded in our genes. Just as induction is so encoded.
The problem that arises is that induction sometimes leads us astray from true understanding. We don’t test causal hypotheses enough, nor search thoroughly enough for alternative explanations. It is a commonplace in detective novels that the “obvious” cause of a crime does not turn out to be the actual cause. That very aspect of the story that we enjoy in a good murder mystery is the type of thinking we don’t apply in our everyday inquiries into causes. We tend to accept overly simplistic explanations for inherently complex causal situations.
On April 21, 2015, Singh Sarao was arrested in London, as the US Justice Department had determined that “His conduct was at least significantly responsible for the order imbalance that in turn was one of the conditions that led to the flash crash [of May 2010].” See, http://reut.rs/1PFtcsq.
The flash crash occurred, not because of the actions of any one individual, but rather because of the complex structure of the network of trading relationships that exists in equity markets. Even if Sarao’s trading were the straw that broke the camel’s back, was his trading the cause of the flash crash? If Sarao had shown up for work 10 minutes later that day, would some other spoofer then have been in the headlines in April as the cause of the flash crash?
As seekers of causes, we are unwilling to accept the role that pure chance may play. Stanislaw Lem’s novel, “The Chain of Chance”, http://reut.rs/1PFtcsq, plays with what most think of as the bane of good writing, plot driven by chance. In this detective story, randomness is the perpetrator. And the reader’s unwillingness to consider that possibility is what motivates the suspense throughout the book.
At a later point in “Redshirts”, Finn takes a Dahl-like stance when another colleague, Hester says, “if the shoe fits, I’m going to call it a shoe….” Finn replies, “…what you’re trying to do is impose causality on random events.” (pp. 110-111).
Assigning causes may help us emotionally to feel in control of our environment, at least through a belief that we understand its causal mechanisms. That positive feeling may exist even if the purported causes are misassigned. But misassigning causes is likely to lead us to make future decisions that are not salutary, because our efforts to control the environment are then focused wrongly on ameliorating the effects arising from a cause that is not a cause.
Systemic Risk Control: Is it Possible to Have Too Much?
In 1948 Poul Anderson, a giant of science fiction writing, published his first story, “Genius”. It is a story about the mismanagement of systemic risk.
In its particulars it is about an intergalactic empire (of course), the Solarians, committed to the maintenance of stability as the only path for long-term survival. A potential threat from within has come to the attention of the imperial government. Something needs to be done about it.
The story isn’t a “space opera.” More in line with Galileo’s “Dialogue Concerning the Two Chief World Systems,” the story is, essentially, a dialogue on the political philosophy of stabilizationism (I made up that word).
The protagonists are Heym, a psychologist arguing against the single-minded goal of stability, and Goram, the head of the imperial armed forces (the regulator, if you will), who would stamp out any “disruptive factor” as “a danger that can wreck civilization.”
Heym avers, “Progress is disruption.” To which, Goram responds, “the Empire is based on stasis. It’s sacrificed progress for – survival.” “Even stasis isn’t safe,” is the comeback from Heym.
Achieving a crisis-free equilibrium may be quite painful, and maintaining it might be impossible except at great cost. Heym says, the “purpose of ultimate satisfied stasis cannot be realized in a basically dynamic cosmos.” He continues, “How reconcile and coordinate a hundred thousand stars except by absolutism, by deliberate statism?”
For Heym, systemic risk and progress must coexist to achieve dynamic stability over time.
“Genius” has a surprise ending, in which the disruptive forces of so much concern to Goram are revealed to be already beyond the Empire’s ability to exterminate them. "Genius" suggests the inevitability of the failure of the public policy goal of stabilizationism.
“Genius” is a good story. And as a metaphorical tale about the hazardous side of systemic risk regulation, it is thought provoking.
The story was published in Astounding magazine and subsequently reprinted in The Best Science Fiction Stories 1949, currently out of print.
A short radio drama version of the story, first aired in 1958, unfortunately shorn of much of the political philosophy, can be found online, here: http://bit.ly/1Kgduxt
Uncertain about Risk
In Martin Davies’ excellent post (http://bit.ly/1GVY95j) on lessons for risk management from the Germanwings crash, he points to the risk management standard ISO 31000, quoting therefrom as follows:
"Risk is the effect of uncertainty on objectives."
This is a very clever definition of risk, but it is also a bit troubling.
Based on this definition, unless there is an objective, there is no risk, only uncertainty.
For example, for ISO tomorrow’s weather is uncertain, but not risky unless I articulate some objective with respect to which the weather tomorrow may impinge, like a desire to take a stroll in the park.
This definition implies the existence of risk with respect to an event is a subjective matter. It is subjective because risk exists only for those who define an objective that is impinged upon by the particular uncertainty. It is also subjective because for a given individual, uncertainty with respect to an event may represent a risk at one time but not at another.
Risk, then, following ISO, in philosophical terms, is not intersubjective (http://en.wikipedia.org/wiki/Intersubjectivity). I have previously argued that risk models are not intersubjective, but the implication of ISO is more fundamental than that. At any time no two people (or group of people) may agree on the existence of a particular risk.
The ISO risk definition does not associate risk with measurability. It thus discards Frank Knight’s (http://en.wikipedia.org/wiki/Frank_Knight) definition of risk as future events belonging to the subset of uncertainties that are measurable, i.e., where the distribution of outcomes is known (or, generously, estimable).
For Frank Knight “black swans” are not risks, for ISO they are risks, but only if they impact objectives. For Knight, risk and uncertainty demand different approaches to determining action. For Knight, risks are addressable by objective means, and general assent as to the assessment of risk is possible. For ISO, I am not sure, but I think no fundamental difference is contemplated, and the subjectivity of risk means that general assent as to pretty much any risk is problematic or at least fragile.
But the very generality of the ISO definition is a source of unease for me. Let’s for simplicity adopt an anthropocentric perspective. Ludwig von Mises (http://en.wikipedia.org/wiki/Human_Action) argued (forcefully) that human action is willful. Every human action has an objective. But if all human action is willful, then can't I pretty much be guaranteed to find some risk with respect to any action? Heck, I can probably find some number of risks with respect to any action.
If every action of every human has a multiplicity of (subjective) risks that impinge on the (subjective) objective (sorry about that!), then in what way is the ISO definition of risk helpful to me at all?
For Knight, the definition of risk acts like the first stage of a decision tree. It is a useful way of bifurcating uncertain events into those that should be subject to one or another form of analysis (statistical or judgmental). For ISO, the definition of risk encompasses every human action, and therefore suggests nothing in itself that either economizes decision effort or instructs as to what to do next.
OFR critique of Fed stress tests rests on questionable claims
On March 3, just a couple of days before the latest results of US bank stress tests were released, the Office of Financial Research, an organization within the US Treasury Department, published a paper with the somewhat provocative title, Are the Federal Reserve’s Stress Test Results Predictable?
The authors assert that the stress test results are highly predictable, and that predictability is a bad thing.
The paper’s substance rests on three questionable claims.
The first claim is that the Fed stress test results could be discerned by anyone just from knowledge of specifications of the Fed’s stress scenarios and from prior periods’ stress test results.
The evidence for the claim is provided by empirical analysis of the results of previous stress scenario exercises. Some impressively high correlations are presented. Of these, some may provide redundant information. Some are based on strikingly little data. In one case the authors say the data “suggest increasing predictability over time” based on a comparison of two stress test exercises. The authors did not choose to apply their empirical estimates to predict the 2015 stress test results from the available data, unfortunately.
The second claim builds on the first. The authors claim that because bank stress tests results are predictable they are uninformative. This claim is based on a misapplication of information theory. In information theory a message contains no information if the content is perfectly predictable. The authors are asserting that at the time the Fed released the details of the stress scenarios to be used by banks in conducting their testing, the public could immediately and without material error have anticipated the results of the stress test exercise.
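As a one-line reminder of the underlying idea (standard information theory, not anything specific to the OFR paper): the information content of observing an outcome that had probability p is I(p) = -log2(p), which goes to zero as p approaches 1. A message whose content was already certain carries no information.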
The third claim of the paper relies on acceptance of the first two. It is asserted that because the release of the results is uninformative, the stress tests themselves are ineffective. That is, it doesn’t matter what the results are, i.e., what the stressed capital ratios are; predictability alone is sufficient for the stress test exercise to be ineffective.
Despite the headline-grabbing title and the publicity-generating timing of the paper’s release, the OFR paper doesn’t provide compelling arguments in support of its claims.
The OFR paper’s arguments are not necessary to identify potential problems inherent to the Fed stress testing exercise.
For example, requiring banks to go through this exercise induces adaptive behavior by those banks with the goal of obtaining better stress test results. However, that same behavior may lead to less robustness to other possible scenarios that differ from the scenarios used in the exercise.
That same adaptive behavior may also drive all banks to follow similar business strategies, thereby reducing diversity in the system, diversity which otherwise might buffer the impact of systemic events.
Additionally, the information shared publicly by releasing stress results may affect banks’ behavior toward each other in ways that may lead to less risk sharing and greater concentration of risk in a few banks.