Here at Metricon 2.0 there are many interesting talks, as expected.
For example, Russell Cameron Thomas of Meritology mentioned the difference between puzzle thinking (looking only under the light you already have) and mystery thinking (shining a light into unknown areas to see what else is out there). Seems to me most of traditional security is puzzle thinking. Speakers and questioners in other talks said things like "that's a business question that we can't control" (literally throwing up their hands), that we can only measure where "we can intervene", that "we don't have enough information" to form an opinion, and so on. That's all puzzle thinking.
Which is unfortunate, given that measuring only what you know makes measurements hard to relate to business needs, hard to apply to new, previously unknown problems, and very hard to use to deal with problems you cannot fix.
Let me hasten to add that Thomas's talk, entitled "Security Meta Metrics—Measuring Agility, Learning, and Unintended Consequence", went beyond these puzzle difficulties and into mysteries such as uncertainty and mitigation.
Not only that, but his approach of an inner operational loop (puzzle) tuned by an outer research loop (mystery) is strongly reminiscent of John R. Boyd's OODA loop. Thomas does not appear to have been aware of Boyd, which suggests that by independently reinventing much the same process description he has validated that Boyd was onto something.
-jsq
Thanks for pointing me to John Boyd's work and OODA. I actually came across that model years ago but lost the pointer. Similar models have been developed in other fields, such as cognitive science and quality management ("Plan-do-check-act"). What's great about Boyd is that he has put this in both a strategic and tactical framework in the context of conflict or competition. Not enough security managers and executives think this way.
As a rather glaring example, consider the congressional testimony of the DHS CIO Scott Charbo (press coverage: http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9025420&pageNumber=1; testimony web cast: http://boss.streamos.com/wmedia/homeland/chs/cyberjune.wvx). From the CW article: "Other committee members grilled Charbo on his awareness of previous computer intrusions at other federal agencies by Chinese hackers, and asked him why he had failed to solicit detailed information on the attacks from US-CERT and intelligence agencies." It was clear that DHS did not have systematic organizational learning activities or metrics. If I recall the web cast correctly, Charbo admitted, "We don't know what we don't know."
Sadly, they are not alone. The same could probably be said for most organizations, and it provides a prima facie case for organizational learning in information security management.
Posted by: Russell C. Thomas | August 07, 2007 at 02:23 PM
The "puzzles vs. mysteries" idea came from this article: http://www.smithsonianmagazine.com/issues/2007/june/presence-puzzle.php
My presentation "Security Meta Metrics" will be posted here: http://meritology.com/resources/index.htm
Posted by: Russell C. Thomas | August 07, 2007 at 03:09 PM
Personally, I think the Chinese should be embarrassed at going after such a soft target as the U.S. government:
http://www.theregister.co.uk/2007/08/06/govt_password_security/
Social engineering worked for Kevin Mitnick,
http://www.amazon.com/Art-Deception-Controlling-Element-Security/dp/076454280X/ref=pd_bbs_sr_1/105-1111965-9856406?ie=UTF8&s=books&qid=1186517897&sr=8-1
and it still works for anybody who wants to attack the U.S. government; no actual technical skills required.
Meanwhile, DHS mandated a single operating system, so who would expect such a department to be concerned about security?
http://riskman.typepad.com/perilocity/2006/08/dhs_microsoft_a.html
-jsq
Posted by: John S. Quarterman | August 07, 2007 at 04:21 PM