CRYPTO-GRAM
April 15, 2001
by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@counterpane.com
<https://www.counterpane.com>
A free monthly newsletter providing summaries, analyses, insights, and
commentaries on computer security and cryptography.
Back issues are available at
<https://www.counterpane.com/crypto-gram.html>. To subscribe or
unsubscribe, see below.
Copyright (c) 2001 by Counterpane Internet Security, Inc.
** *** ***** ******* *********** *************
In this issue:
Natural Advantages of Defense: What Military History
Can Teach Network Security, Part 1
A Correction: nCipher
CSI's Computer Crime and Security Survey
Crypto-Gram Reprints
News
Counterpane Internet Security News
Fake Microsoft Certificates
Comments from Readers
** *** ***** ******* *********** *************
Natural Advantages of Defense:
What Military History Can Teach Network Security, Part 1
Military strategists call it "the position of the interior." The defender
has to defend against every possible attack. The attacker, on the other
hand, only has to choose one attack, and he can concentrate his forces on
that one attack. This puts the attacker at a natural advantage.
Despite this, in almost every sort of warfare the attacker is at a
disadvantage. More people are required to attack a city (or castle, or
house, or foxhole) than are required to defend it. The ratios change over
history -- the defense's enormous advantage in WW I trench warfare lessened
with the advent of the WW II blitzkrieg, for example -- but the basic truth
remains: all other things being equal, the military defender has a
considerable advantage over the attacker.
This has never been true on the Internet. There, the attacker has an
advantage. He can choose when and how to attack. He knows what particular
products the defender is using (or even if he doesn't, it usually is one of
a small handful of possibilities). The defender is forced to constantly
upgrade his system to eliminate new vulnerabilities and watch every
possible attack, and he can still get whacked when an attacker tries
something new...or exploits a new weakness that can't easily be
patched. The position of the interior is a difficult position indeed.
A student of military history might be tempted to look at the Internet and
wonder: "What is it about warfare in the real world that aids the defender,
and can it apply to network security?"
Good question.
The defender's military advantage comes from two broad strengths: the
ability to quickly react to an attack, and the ability to control the
terrain.
The first strength is probably the most important; a defender can more
quickly shift forces to resupply existing forces, shore up defense where it
is needed, and counterattack. I've written extensively about how this
applies to computer security: how detection and response are critical, the
need for trained experts to quickly analyze and react to attacks, and the
importance of vigilance. I've built Counterpane Internet Security's
Managed Security Monitoring service around these very principles, precisely
because it can dramatically shift the balance from attacker to defender.
The defender's second strength, better knowledge of the terrain, is just
as valuable. He knows where the good hiding places are, where the
mountain passes are, how to sneak through the caves. He can modify the
terrain: building
castles or surface-to-air missile batteries, digging trenches or tunnels,
erecting guard towers or pillboxes. And he can choose the terrain on which
to stand and defend: behind the stone wall, atop the hill, on the far side
of the bridge, in the dense jungle. The defender can use terrain to his
maximum advantage; the attacker is stuck with whatever terrain he is forced
to traverse.
On the Internet, this second strength is one that network defenders
seldom exploit: knowledge of the network. The network administrator
knows exactly how his network is built (or, at least, he should), what it
is supposed to do, and how it is supposed to do it. Any attacker except a
knowledgeable insider has no choice but to stumble around, trying this and
that, trying to figure out what's where and who's connected to whom. And
it's about time we exploited this advantage.
Think about burglar alarms. The reason they work is that the attacker
doesn't know they're there. He might successfully bypass a door lock, or
sneak in through a second-story window, but he doesn't know that there is a
pressure plate under this particular rug, or an electric eye across this
particular doorway. MacGyver-like antics aside, any burglar wandering
through a well-alarmed building is guaranteed to trip something sooner or
later.
Traditional computer security has been static: install a firewall,
configure a PKI, add access-control measures, and you're done. Real
security is dynamic. The defense has to be continuously vigilant, always
ready for the attack. The defense has to be able to detect attacks
quickly, before serious damage is done. And the defense has to be able to
respond to attacks effectively, repelling the attacker and restoring order.
This kind of defense is possible in computer networks. It starts with
effective sensors: firewalls, well-audited servers and routers,
intrusion-detection products, network burglar alarms. But it also includes
people: trained security experts who can quickly separate the false alarms
from the real attacks, and who know how to respond. This is security
through process. This is security that recognizes that human intelligence
is vital for a strong defense, and that automatic software programs just
don't cut it.
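To make the "network burglar alarm" idea concrete, here is a minimal
sketch in modern Python. It is only an illustration, not a product: the
port number is an arbitrary placeholder, and a real deployment would route
the alert to a trained analyst rather than printing it. The trick is the
defender's knowledge of the network: nothing legitimate ever touches this
port, so any connection at all is worth looking at.

    # Minimal "network burglar alarm" sketch.  The port is an arbitrary
    # choice; the only requirement is that no legitimate user or service
    # ever touches it, so every connection is suspicious by construction.
    import socket
    import datetime

    TRIPWIRE_PORT = 2000  # placeholder: a port nothing on your net uses

    def run_tripwire(port=TRIPWIRE_PORT):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen(5)
        while True:
            conn, (addr, src_port) = srv.accept()
            stamp = datetime.datetime.now().isoformat()
            # In practice this alert goes to a human analyst, not stdout.
            print(stamp, "ALARM: unexpected connection from",
                  "%s:%d" % (addr, src_port))
            conn.close()

    if __name__ == "__main__":
        run_tripwire()

The code is trivial; the value is in the asymmetry. The attacker has no
way to know which ports are tripwires, and the defender does.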
It's a military axiom that eventually a determined attacker can defeat any
static defense. In World War II, the British flew out to engage the
Luftwaffe, in contrast to the French who waited to meet the Wehrmacht at
the Maginot Line. The ability to react quickly to an attack, and intimate
knowledge of the terrain: these are the advantages the position of the
interior brings. A good general knows how to take advantage of them, and
they're what we need to leverage effectively for computer security.
The importance of detection and response in network security:
<https://www.counterpane.com/crypto-gram-0005.html#ComputerSecurityWillWeEverLearn>
Marcus Ranum has written and spoken about Internet burglar alarms.
<https://web.ranum.com/pubs/pdf/burglar-alarms.pdf>
** *** ***** ******* *********** *************
A Correction: nCipher
In the Crypto-Gram of January 2000, I wrote about a security vulnerability
publicized by nCipher. I called this a publicity attack, meaning an
attack designed more to draw attention to the discoverer than to the
vulnerability itself. In any case, my write-up contained a factual error:
I claimed that nCipher distributed a tool that exploited this
vulnerability, when in fact they
did not. (I remember reading this somewhere, but I cannot remember
where. And searching the various news archives, I can't find anyone else
who states this "fact.")
nCipher was not pleased by this error.
In the February Crypto-Gram I published a letter from nCipher correcting
this error. I thought this was the end of it, and I moved on to other
things. Unfortunately, in the January 2001 Crypto-Gram I published the URL
to the January 2000 article. (This is what I do in the "Crypto-Gram
Reprints" section.) This whole escapade never entered my mind; I didn't
think about the history of the article.
Unfortunately, people read (or reread) the article without seeing the
correction. And some of them contacted nCipher to complain. And nCipher
(understandably) got all pissed off at me once again.
So I am trying to correct the record. nCipher claims that they did not
release a tool that exploits the vulnerability, and I believe them. They
are a responsible company, and despite our disagreements about the
motivations behind their year-old discovery, they are an honorable and
upstanding member of the security community.
For the record, nCipher has not threatened me or Counterpane with legal
action. (I consider this admirable, which is why I mention it.) They
contacted me personally. They have a valid complaint, and I
apologize. All you guys, stop bugging them. And I hope that come next
January, I don't forget about this once again when it comes time to write
the "Crypto-Gram Reprints" section.
The original essay:
<https://www.counterpane.com/crypto-gram-0001.html#KeyFindingAttacksandPublicityAttacks>
nCipher's letter:
<https://www.counterpane.com/crypto-gram-0002.html#CommentsfromReaders>
** *** ***** ******* *********** *************
CSI's Computer Crime and Security Survey
For the past six years, the Computer Security Institute has conducted an
annual computer crime survey. The results are not statistically meaningful
by any stretch of the imagination -- they're based on about 500 survey
responses each year -- but it is the most interesting data on real-world
computer and network security that we have. And the numbers tell a
coherent story. (I'm just going to talk about the 2001 numbers, but the
numbers for previous years track pretty well.)
64% of respondents reported "unauthorized use of computer systems" in the
last year. 25% said that they had no such unauthorized uses, and 11% said
that they didn't know. (I believe that those who reported no intrusion
actually don't know.) The number of incidents was all over the map, and
the number of insider versus outsider incidents was roughly equal. 70% of
respondents report their Internet connection as a frequent point of attack
(this has been steadily rising over the six years), 18% report remote
dial-in as a frequent point of attack (this has been declining), and 31%
report internal systems as a frequent point of attack (also declining).
The types of attack range from telecom fraud to laptop theft to
sabotage. 40% experienced a system penetration, 36% a denial of service
attack. 26% reported theft of proprietary information, and 12% financial
fraud. 18% reported sabotage. 23% had their Web sites hacked (another 27%
didn't know), and over half of those had their Web sites hacked ten or more
times. (90% of the Web site hacks were just vandalism, but 13% included
theft of transaction information.)
What's interesting is that all of these attacks occurred despite the wide
deployment of security technologies: 95% have firewalls, 61% an IDS, 90%
access control of some sort, 42% digital IDs, etc. Clearly the
technologies are not working sufficiently well.
The financial consequences are scary. Only 196 respondents would quantify
their losses, which totaled $378M. From under 200 companies! In one
year! This is a big deal.
More people are reporting these incidents to the police: 36% this
year. Those who didn't report were concerned about negative publicity
(90%) and competitors using the incident to their advantage (70%).
This data is not statistically rigorous, and should be viewed as suspect
for several reasons. First, it's based on the database of information
security professionals that the CSI has (3900 people), self-selected by the
14% who bothered to respond. (The people responding are probably more
knowledgeable than the average sysadmin, and the companies they work for
more aware of the threats. Certainly there are some large companies
represented here.) Second, the data is not necessarily accurate; it reflects
only the best recollections of the respondents. And third, most hacks
still go unnoticed; the data only represents what the respondents actually
noticed.
Even so, the trends are unnerving. It's clearly a dangerous world, and has
been for years. It's not getting better, even given the widespread
deployment of computer security technologies. And it's costing American
businesses billions, easily.
The survey (you have to give them your info, and they will send you a paper
copy):
<https://www.gocsi.com/prelea_000321.htm>
** *** ***** ******* *********** *************
Crypto-Gram Reprints
Microsoft Active Setup "Backdoor":
<https://www.counterpane.com/crypto-gram-0004.html#MicrosoftActiveSetup"Backdoor">
UCITA:
<https://www.counterpane.com/crypto-gram-0004.html#TheUniformComputerInformationTransactionsAct(UCITA)>
Cryptography: The Importance of Not Being Different:
<https://www.counterpane.com/crypto-gram-9904.html#different>
Threats Against Smart Cards:
<https://www.counterpane.com/crypto-gram-9904.html#smartcards>
Attacking Certificates with Computer Viruses:
<https://www.counterpane.com/crypto-gram-9904.html#certificates>
** *** ***** ******* *********** *************
News
Government warning about ways to bypass an IDS:
<https://www.nipc.gov/warnings/assessments/2001/01-004.htm>
Good articles on building secure Linux:
<https://www.rootprompt.org/article.php3?article=903>
<https://www.rootprompt.org/article.php3?article=931>
This is the FBI affidavit about Robert Hanssen that discusses the letter
the FBI decrypted:
<https://www.fas.org/irp/ops/ci/hanssen_affidavit2.html>
One of the secrets Hanssen told the Russians was that the U.S. dug a tunnel
under the Soviet embassy in Washington to eavesdrop on their
communications. From the March 19th Newsweek article on the subject:
"laser beams could pick up vibrations from the keystrokes of Soviet
ciphering machines -- helping to decode their signals." This is an example
of what I call "side-channel cryptanalysis," using information other than
the plaintext and ciphertext to cryptanalyze traffic. I have long believed
it to be the primary way cryptanalysis is done in the intelligence community.
Side Channel Cryptanalysis:
<https://www.counterpane.com/side_channel.html>
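The same principle has a purely software flavor, too. As a toy sketch in
Python (the function names and the "secret" value are invented for
illustration): a comparison routine that bails out at the first mismatched
byte leaks, through its running time, how long a correct prefix an
attacker has guessed -- information that comes from neither the plaintext
nor the ciphertext.

    import hmac

    def leaky_equals(secret, guess):
        # Early-exit comparison: the time taken grows with the length of
        # the matching prefix, so timing measurements can recover the
        # secret one byte at a time.
        if len(secret) != len(guess):
            return False
        for s, g in zip(secret, guess):
            if s != g:
                return False
        return True

    def constant_time_equals(secret, guess):
        # The standard-library countermeasure: the comparison takes the
        # same time no matter where (or whether) the inputs differ.
        return hmac.compare_digest(secret, guess)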
I've already written about the Eastern European hackers stealing credit
card numbers, the FBI's warning, and the fact that the thieves were using
documented and fixable security vulnerabilities. Well, the Center for
Internet Security has released a tool that scans your network to see if it
is vulnerable to the specific attacks used by these groups. On the one
hand, this is kind of dorky: all the vulnerabilities are well known, and
any scanner is likely to pick them up...and more. But on the other hand,
this is an excellent idea. The tool works, it's free, and it addresses a
problem in the news. If people download and run this tool, it will help
increase their security. Kudos to the CIS for making this available.
<https://www.cisecurity.org/patchwork.html>
Businesses are losing more to Internet crime, even though they're using
more security technology.
<https://news.cnet.com/news/0-1003-200-5109411.html?tag=mn_hd>
<https://www.cnn.com/2001/TECH/internet/03/12/csi.fbi.hacking.report/index.html>
Write your own Visual Basic worm and infect the Net!
<https://www.wired.com/news/technology/0,1282,42375,00.html>
Remember kids, this is "for educational use only."
Remember Microsoft's claim that the security features added to the next
versions of Office and Windows would make them extremely difficult to
pirate? Well, the security lasted negative one month.
<https://www.wired.com/news/business/0,1367,42402,00.html>
Son of CPRM:
<https://www.computerworld.com/cwi/stories/0,1199,NAV47-68-84-88_STO58695,00.html>
Impressive identity thefts:
<https://www.nypost.com/news/regionalnews/26868.htm>
The revised 802.11 security standard is more public:
<https://grouper.ieee.org/groups/802/11/Documents/DocumentHolder/1-18.zip>
But more flaws in its security have been found:
<https://www.zdnet.com/zdnn/stories/news/0,4586,2704419,00.html>
<www.ieee802.org/11>
Security and 802.11 wireless:
<https://www.csdmag.com/story/OEG20010323S0085>
Interesting article on the NSA:
<https://www.cnn.com/SPECIALS/2001/nsa/stories/codebreakers/index.html>
A vulnerability was found in the OpenPGP standard. If an attacker can
modify the victim's encrypted private key file, he can intercept a signed
message and then figure out the victim's signing key. (Basically, if the
attacker replaces the public key parameters with weak ones, the next
signature exposes the private key.) This is a problem with the data
format, and not with the cryptographic algorithms. I don't think it's a
major problem, since someone who can access the victim's hard drive is more
likely to simply install a keyboard sniffer. But it is a flaw, and shows
how hard it is to get everything right. Excellent cryptanalysis work here.
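To see why tampered parameters are so dangerous, here is a deliberately
tiny DSA-style sketch in Python. The toy numbers and variable names are
mine, and this is not the attack in the paper -- it only shows the algebra
at the end of it: once the attacker can recover the per-signature nonce k
(which is what weakening the stored parameters buys him), a single
signature gives up the private key.

    # Toy DSA with tiny parameters -- illustration only, nothing is secure.
    p, q, g = 607, 101, 64        # q divides p - 1; g has order q modulo p

    def inv(a, m):
        return pow(a, -1, m)      # modular inverse (Python 3.8+)

    def sign(x, h, k):
        r = pow(g, k, p) % q
        s = (inv(k, q) * (h + x * r)) % q
        return r, s

    x = 57                        # the victim's private signing key
    h = 42                        # hash of some message, reduced mod q
    k = 13                        # per-signature nonce, meant to stay secret
    r, s = sign(x, h, k)

    # If the attacker knows k, the private key falls out of one signature:
    x_recovered = ((s * k - h) * inv(r, q)) % q
    assert x_recovered == x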
Announcement:
<https://www.i.cz/en/onas/tisk4.html>
News reports:
<https://www.nytimes.com/2001/03/21/technology/21CODE.html>
<https://securitygeeks.shmoo.com/article.php?story=20010320130246610#comments>
<https://www.wired.com/news/politics/0,1283,42553,00.html>
<https://news.cnet.com/news/0-1003-200-5208418.html?tag=mn_hd>
The research paper:
<https://www.i.cz/en/pdf/openPGP_attack_ENGvktr.pdf>
More dire warnings from the FBI:
<https://www.washingtonpost.com/wp-dyn/articles/A31203-2001Mar20.html>
Web bug reports. See who's using Web bugs.
<https://www.securityspace.com/s_survey/data/man.200102/webbug.html>
Microsoft Outlook virus. (Note: this is a joke.)
<https://www.satirewire.com/news/0103/outlook.shtml>
This is a clever idea to help stop cell phone theft:
<https://www.cnn.com/2001/TECH/ptech/03/28/SMS.bomb.idg/>
<https://www.security-informer.com/ic_490202_3494_1-1481.html>
Yet another Windows feature that will be riddled with
insecurities. Windows XP will have something called "shared desktop" that
will allow users to manipulate their PC over the Internet. Read the last
paragraph of this article. Does anyone think this will be secure?
<https://pcworld.idg.com.au/pcw.nsf/news/2FB516AAFE6254DACA256A1C00818D0A!Open>
Microsoft redefines the word "secure." Remember when "secure" meant "not
leaking private or secret data against the owner's wishes"? Microsoft
thinks it means "unable to duplicate copyrighted works that are nonetheless
within the Fair Use doctrine."
<https://www.theregister.co.uk/content/4/17851.html>
Another Microsoft disaster in the making: an operating system that
automatically updates itself without the user's knowledge or consent:
<https://www.theregister.co.uk/content/4/17944.html>
Trying to make a Napster-proof CD:
<https://www.salon.com/tech/inside/2001/03/27/cd_protection/print.html>
"War Driving": Driving around with an 802.11-equipped PC, looking for
unsecured networks to access:
<https://www.theregister.co.uk/content/8/17976.html>
From a 1996 NSA document: "This instruction incorporates a philosophy of
'risk management' in lieu of the 'risk avoidance' philosophy employed in
the previous document." And: "The emphasis is placed on 'detection' of
attempted penetration in lieu of 'prevention' of penetration." Good for them.
<https://cryptome.org/nstissi-7003.htm>
First-ever cross-platform virus: affects Windows and Linux:
<https://cgi.zdnet.com/slink?90268:8469234>
SEC may regulate Internet security:
<https://www.zdnet.com/zdnn/stories/news/0,4586,2703079,00.html>
An April Fools RFC about firewalls:
<https://www.isi.edu/in-notes/rfc3093.txt>
Another Internet Explorer hole. Normally, I wouldn't bother. But this
quote is telling: "Security 'is an ongoing issue with Internet Explorer
because it is such a complicated software that interoperates with many
other applications that it is too difficult to figure out all of these
vulnerabilities,' said Richard Smith, chief privacy officer at the
Denver-based nonprofit group the Privacy Foundation."
<https://yahoofin.cnet.com/news/0-1005-200-5399895.html>
Companies that sell security software exacerbate the problem, as they try to
sell software solutions to people problems. (I've been saying this for a
while.)
<https://news.cnet.com/news/0-1003-201-5404381-0.html?tag=owv>
Here's a hacking tool designed to sneak past IDSs:
<https://news.cnet.com/news/0-1003-200-5423454.html?tag=mn_hd>
Good article on the reality vs. hype of cyberterrorism:
<https://www.securityfocus.com/templates/article.html?id=184>
I've already written about the dangers of software auto-update
features. Now we have an example in a mass-market product. One day
ReplayTV updated itself to disable a valuable feature:
<https://www.asktog.com/columns/045ReplayTV.html>
** *** ***** ******* *********** *************
Counterpane Internet Security News
Counterpane announced more customers:
<https://www.counterpane.com/pr-marketexp.html>
Cisco supports Counterpane's Managed Security Monitoring:
<https://newsroom.cisco.com/dlls/corp_032601b.html>
Schneier will be speaking at the CATO Institute, in Washington DC, on April
19th:
<https://www.cato.org/events/010419bf.html>
Schneier will be speaking at BlackHat Asia, in Hong Kong (4/23) and
Singapore (4/26):
<https://www.blackhat.com/html/bh-asia-01/bh-asia-01-index.html>
Schneier will be speaking at the Investment Company Institute quarterly
meeting in Washington DC on May 2:
<https://www.ici.org/>
Schneier will be speaking at the BankLink annual conference in New York on
May 4:
<https://www.banklink.com/>
Schneier will be speaking at the Los Angeles ISSA Conference on May 10:
<https://www.issa-la.org/>
Schneier will be speaking at the New York ISSA Conference on May 17:
<https://www.nymissa.org/conference.html>
Schneier spoke on NPR about the difficulty of copy protection on personal
computers. You can listen to the audio of his talk on the web.
<https://www.npr.org/ramfiles/atc/20010305.atc.12.rmm>
** *** ***** ******* *********** *************
Fake Microsoft Certificates
Ah, the tribulations of PKI. Apparently someone impersonated Microsoft to
VeriSign, and got a couple of certificates in Microsoft's name. Oops.
This is a big deal. Microsoft has been pushing trusted code as a cure-all
for Internet viruses and other rogue programs. The idea would be that a
user could only allow signed code from trusted sources to run on his or her
computer. But if you can't trust the certificates, this all falls apart.
There are a lot of details that are unclear. Some news reports claim that
the certificates will expire in a year, others claim that they will be good
forever. (Expired certificates are all over the Web; almost nobody pays
attention.) VeriSign claims that they discovered the fraud almost
immediately, and that there has been no other fraud. While I agree that
their audit processes caught the problem in this case -- and I applaud
their going public with it so quickly -- I just don't believe they can be
sure that no cleverer fraud has gone unnoticed.
What's most interesting is that there is no way to revoke the certificates
(Windows has no CRL features), even though there have been rogue
certificates before. Revocation is critical to making PKI work, and it's
one of the major holes in most consumer PKI applications.
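The check itself is not hard; what's missing is the infrastructure and
the client-side will to do it. As a rough sketch using the modern Python
"cryptography" package (the file names are placeholders, and actually
fetching a fresh CRL from the issuer is left out):

    from cryptography import x509

    with open("signer_cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    with open("issuer_crl.pem", "rb") as f:
        crl = x509.load_pem_x509_crl(f.read())

    # Refuse to trust the signature if the certificate's serial number
    # appears on the issuer's revocation list.
    revoked = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
    if revoked is not None:
        raise SystemExit("certificate revoked -- do not run this code")
    print("not on the CRL (which only helps if the CRL is fresh)")

Without something like this built into the software that decides whether
to run signed code, revocation is just a press release.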
Microsoft has tried to paint this problem as 1) not a security
vulnerability, and 2) all VeriSign's fault anyway. But if it's not a
security vulnerability, why are they issuing all these security patches for
Internet Explorer?
News reports:
<https://www.upside.com/HardwareSoftware/3aba8dcfc.html>
<https://news.cnet.com/news/0-1003-200-5222484.html?tag=tp_pr>
Typical Microsoft evasiveness:
<https://www.microsoft.com/technet/security/bulletin/MS01-017.asp>
Especially interesting is the number of patches Microsoft needed to issue
to ensure that the bogus certificates are actually treated as revoked.
Ten Risks of PKI:
<https://www.counterpane.com/pki-risks.html>
** *** ***** ******* *********** *************
Comments from Readers
From: Bruce Peaslee <bpeaslee@ehsd.co.contra-costa.ca.us>
Subject: Security Patches
I used to religiously -- and automatically -- apply all security patches
for Microsoft Office, but no more! One of the more recent patches to
Outlook 2000 made it impossible to receive executable attachments. This
includes, of course, Microsoft Office files themselves. Thus I could not
e-mail myself an Excel file I was working on at work. I had to uninstall
then reinstall the whole system. I'm not sure what the moral is here, but
even if you do make the attempt, you can get unexpected and unsatisfactory
results.
From: "Mike O'Connor" <mjo@dojo.mi.org>
Subject: Security Patches
Vendors often release patches that do more than address a discrete security
issue. They'll incorporate other bug fixes and performance enhancements,
which potentially introduce new bugs, slowdowns, quirks, downtime,
integration issues, and security downfalls. There's often a sense that the
vendor knows what's best for you, even though they may have an imperfect
understanding of how you use their stuff at a higher level. Installing the
new patch in the name of security may not be the smartest thing to do if
the new "features" of the patch act as a DOS attack or compromise system
integrity.
In most cases, the guts of the system environments that need patching are
unknown quantities to administrators. They don't have the time, charter,
expertise, access to source and build environments, etc. to dig into what
makes their system environments tick at a low level. Admins gain
confidence in what they've built from their black boxes by looking at
high-level performance over time. When asked to apply the latest security
patches, what they see (when not swearing under their breath) is this
unknown amount of entropy potentially kicking them. "Simply" applying
patches takes LOTS of time when factoring in testing.
With all that in mind, I think we need to preach the following:
a) Vendors need to get it in their consciousness and processes that getting
security fixes into their products should be separate from getting other
stuff in their products. Admins need to get fixes from their vendors that
consist of EXACTLY what they run today plus the one lone security fix.
b) Admins need to be aware of what patches are out there and articulate
clearly when they don't run with them. A security patch should be treated
as a potentially actionable system event every bit as much as a "disk is
full" message. (That's where Counterpane can fit in, letting customers
know and tracking how they respond.)
Otherwise, the situation with patches will never get better.
From: "Dunn, Andrew (Reading)" <Andrew.Dunn@compaq.com>
Subject: Insurance
I don't normally respond to newsletters, but your article on insurance
really is very wide of the mark. I could write a treatise on why it is so
badly misconceived, but I don't have that much time to spare. Briefly your
fundamental error is to believe that insurance provides restitution for
loss. It does not.
Insurance provides financial compensation. Whatever the type of insurance,
the insured obtains the insurance for events which he does not want to
happen. In some, but only a very few, cases those events relate to a
financial loss: for example, if I have my wallet stolen and it contains
only cash, no other documents or credit cards. In most cases, the loss
is not primarily financial. If I have accident insurance, I receive cash
if I lose an arm, but it does not replace the arm.
It's exactly the same in business. Businesses do not exist purely to
generate cash, although that's an important, and necessary objective. All
businesses have other objectives than simply making the maximum profit in
the next quarter, regardless of what happens after that. Different
businesses have different objectives and core values. Some aim for
long-term growth, some aim for market leadership, some aim for technology
leadership.
If a business goes into liquidation because of a security hack, then
financial compensation does not replace the business. In the UK, we have a
classic example of why compensation does not make up for loss.
In the foot and mouth crisis, many farmers are having their entire
livestock destroyed after many years of building it up. This is a terrible
time for farmers despite the fact that they will receive financial
compensation. We are starting to hear the first cases of farmers
committing suicide. There is no greater loss than that. And insurance
plays no part in this at all.
From: "Daniel A. Graifer" <dgraifer@cais.com>
Subject: Insurance
As an economist, let me comment on this:
>The real world doesn't work this way. Businesses achieve
>security through insurance. They take the risks they are
>not willing to accept themselves, bundle them up, and pay
>someone else to make them go away. If a warehouse is
>insured properly, the owner really doesn't care if it
>burns down or not. If he does care, he's underinsured.
In general with insurance, this is called the moral hazard problem. The
insurer figures that (in your example) the warehouse owner is best placed
to make sure that fire safety regulations and common-sense precautions are
carried out. That's why insurance policies have deductibles: to encourage
responsible action, since 100% coverage of the risk would invite
irresponsible behavior.
This is also related to the other big bugaboo of insurance: "adverse
selection": Insurance buyers have better knowledge of their risk
characteristics than the insurers, leading higher risk clients to
over-insure (because it's cheap relative to the risks) and low risk clients
to under-insure. That's why non-elective group insurance is cheaper than
individual policies in any risk category.
From: "John J. Adelsberger III" <jja@wallace.lusars.net>
Subject: Insurance
You have written many times about how mistaken you were when you used to
say that mathematics could solve security problems. This is good, because
you're right; mathematics alone will not solve any problem that is not
purely mathematical in nature.
However, the basic mistake you made was not assuming that math could
provide security. The basic mistake was the assumption that magic bullets
really exist, which solve hard problems neatly, cleanly, and without any
side effects.
You might find it interesting to read your own article on insurance over
again in light of this fact. You like to say "repeat after me." Well,
repeat after me: insurance is not a magic bullet. Magic bullets do not exist.
You wrote this:
>When I talk about this future at conferences, a common
>objection I hear is that premium calculation is
>impossible. Again, this is a technical mentality
>talking. Sure, insurance companies like well-understood
>risk profiles and carefully calculated premiums. But
>they also insure satellite launches and the palate of
>wine critic Robert Parker.
Here's how I can rewrite it to be a little less propagandistic: "When I
talk about this future at conferences, a common objection I hear is that it
is impossible to calculate rationally correct premiums free from subjective
whimsy. The answer is simple: premiums will be calculated
irrationally. Favoritism, corporate inertia, reputation, and an old boys
network will be as important as quality. The ability to pass meaningless
certifications with inflated prices and get face time with dough-faced
meeting monkeys will be what counts."
Doesn't sound quite so rosy this time, does it? And yet, you know that it
is the same thing you said: do you really believe there's a rational way to
value the palate of a wine critic up front?!
The irony here is that you think this will somehow dramatically improve the
quality of security products. In fact, it will do precisely the opposite:
it will make it "ok" to produce, sell, buy, and use lousy products. Even
experts will agree, in time. Do you know why most homes are so easy to
break into? Because there's no economic benefit to securing them; just
getting insurance is so much cheaper!
Since you mention premium manipulation as economic incentive to do the
right thing, here's something you'll never get an insurance guy to admit:
The real reason they toy with premiums for "good" customers is purely
marketing. It gets them more insurance sales revenue per policy over the
long run than they'd otherwise have. They don't really think they can
predict the odds of your house being broken into, much less the change in
those odds from installing deadbolts and shatter alarms. They just pretend
to do that because otherwise the government would nail them for illegal
price discrimination, product tying, and so on. They try, of course, to
have a rough idea of how many payouts they're going to have to make in a
year, but that's hardly at the level of one claim; that's just based on
gross statistics over the last x years in y area.
Insurance? Yes. It is coming. Panacea? Well, if you want to make a bet
on that, let me know. I'm always willing to supplement my income.
From: "Fred Renner" <frenner@columbus.rr.com>
Subject: Insurance
In the mid-80s I was trying to sell various electronic security-enhancing
technologies to companies that were being threatened or damaged
substantially by new vulnerabilities in both physical and virtual
worlds. In most of the commercial (versus government) enterprises the
response was leaning, just as you described, toward insurance as the
answer. So I approached some insurance companies with the idea that they
could reduce damage claims by encouraging or rewarding use of better
security practices and technologies. There was even historical precedent
in the insurance companies collectively "inventing" private fire
departments. What I learned during that brief and unrewarding sales effort
was informative as well as discouraging, at least for me.
The insurance industry is a deeply multilayered structure, with the first
layer of companies selling risk protection to consuming organizations or
individuals and buying risk protection from Layer #2 insurance companies,
which may spread it among a consortium of their peers or buy protection
policies from Layer #3. Pricing of all these exchanges is based on
industry-wide experiences. Consequently, the first-layer companies have no
immediate incentive to encourage the use of loss-reducing
technologies. Layers #2 and above are too far removed from the consuming
organizations to care about or influence their behavior.
The net result is a structure that effectively discourages using practices
to inhibit the criminal population and actually subsidizes them by "taxing"
the whole economy. In evolutionary terms, a steady food supply encourages
population growth and we are certainly seeing a growth in virtual
vermin. It may be an efficient solution in bean-counting terms but it has
always galled me. Wish I had an answer.
From: /dev/null <null@attrition.org>
Subject: Attrition.org's Web site Defacement Data
Hi, Bruce. In the recent Crypto-Gram, you referenced us:
>Security patches aren't being applied:
><https://www.zdnet.com/zdnn/stories/news/0,4586,2677878,00.html>
>Best quote: "Failing to responsibly patch computers led
>to 99 percent of the 5,823 Web site defacements last
>year, up 56 percent from the 3,746 Web sites defaced in
>1999, according to security group Attrition.org." I'm
>not sure how they know, but is scary nonetheless.
Shockingly enough, ZDNet put words in our mouth. The stats on defacements
are ours; that's what we do. We mirror defacements and provide statistical
information about them. We do not, however, speculate as to how or why
sites were defaced. Certainly, it's usually quite obvious to us what tools
and scripts defacers are using (e.g., last summer there was an astounding
spike in Red Hat Linux defacements, and most of those boxes defaced had
port 21 open...it's an easy guess that they were hit with wu-ftpd's
vulnerability); however, unless we have concrete evidence of some kind, we
don't speculate to the press on what vulnerability was used. At most, a
general statement could be made that the vast majority of defacements occur
due to known vulnerabilities that have not been patched or otherwise
defended against. We know of defacements accomplished through weak
passwords, social engineering, or other unusual approaches, but by far most
of them are simply a matter of kids using available tools to exploit
well-known holes for which patches are available.
From: Greg Guerin <glguerin@amug.org>
Subject: Codesigning
In your February "Comments from Readers" section, Phillip Hallam-Baker
<hallam@ai.mit.edu> wrote:
>If the hackers circulated the private key to any great
>extent, the compromise would soon be known.
What if hundreds or thousands of different hackers are doing this? What if
an autonomous software agent (or hundreds of them) does it dozens of times
a day? How many humans are actually monitoring this kind of thing at
VeriSign (which actually issues and revokes the Authenticode code-signing
certificates, not Microsoft)? How long would it take for them to find out,
especially if the worm is not wreaking immediate havoc, but only sneaking
into place for a later or subtler exploit?
>The certificate would be revoked and would not be
>accepted by the Authenticode signing service for future
>code signing requests.
This implies that there is a centralized Authenticode service that hands
out single-use permissions to sign each individual piece of code; i.e.,
centralized approval of each signature created. I could be wrong, and I'd
happily plead ignorance, but I don't think Authenticode works that way. An
Authenticode signing key is just a VeriSign Class 3 code-signing key, and
any entity that has the private key and the cert can create the
Authenticode signature.
>Software that had already been signed would still pose
>a risk, but this could be controlled through warnings in
>the press.
Ah, yes, "warnings in the press," whose efficacy is so conveniently
illustrated in "The Security Patch Treadmill."
Another illustration is the recent incident of two Authenticode
certificates erroneously issued by VeriSign to someone posing as "Microsoft
Corporation." These certificates assert that the key-holder is "Microsoft
Corporation," and someone paid real money ($400/year each) to obtain
them. VeriSign's fraud monitoring eventually figured out the mistake, but
it took them six weeks after issuing the certificates. Even so, no
Microsoft software is capable of automatically obtaining the Certificate
Revocation List (CRL) listing those two bogus certs, because there's no
revocation infrastructure. Oops. Instead, Microsoft has to issue a patch
that installs a local CRL and enables CRL checking in all their
Authenticode-using programs. So once again, it comes down to "warnings in
the press." Will Microsoft do the same thing every time a certificate
needs to be revoked before its stated expiry date?
>In the future it is likely that a higher level of
>security will be possible in enterprise
>configurations. Ideally each software installation
>would be referred to a central service for prior
>approval.
Something like a centralized service that can approve or decline the
execution of every executable on every known client machine in the
enterprise. Something like, say, SecureEXE:
<https://www.securewave.com/ftp/free/SecureEXE%20WP.pdf>
<https://www.securewave.com/products/secureexe/faq-exe.html>
Something like this is essentially powerless against buffer overrun exploits,
scripting or macro viruses, interpreted executables like Java, or directly
interpreted languages like Python, Perl, etc. In short, something like
this would only work on roughly half the worms/viruses out there. The
likely effect of this central service would be little more than to change
the balance of how new worms and viruses are written.
From: Steven Bass <sbass@speakeasy.org>
Subject: 802.11 Standard Security
I think your analysis of the 802.11 flaw is flawed. Yes, an analysis by a
security expert would have probably caught this error and produced a better
solution. But this was not a closed process.
The American standard processes, whether by IEEE, ANSI, ASTM, or one of the
many industry fora, are essentially volunteer processes. As an attendee at
a number of ANSI and IEEE standards meetings, I can say that while they are
difficult, often highly political, and frequently incomprehensible to new
attendees, the people there are generally open to contributions from anyone
interested in contributing. My guess is that the people who implemented
the WEP security were well-meaning people not versed in security; they did
the best they could to develop a solution within the constraints they
faced. At the time, they were limited to 40-bit crypto for export. They
may have had proprietary solutions in the field which needed to remain
compatible. So they took their best shot and drew up something.
While the standards are under development, many working documents are
freely available (for example the 802.11 documents are at
<https://grouper.ieee.org/groups/802/11/Documents/index.html>), so any
cryptographer with interest could follow the process by reading the
documents and provide feedback at no cost.
When released, they do cost money, as that is a primary funding mechanism
for IEEE and ANSI. But this is a red herring; once the standard is
released, it is largely too late to change things.
While the standard committees are open to anyone, they are largely driven
by corporate interests. Corporations have both money and a vested interest
in the outcome. This has worked pretty well in the past, by forcing
competing manufacturers, network system designers, and large corporate
users to build consensus.
The real question behind your comment is: what is the best way to create
standards? Because it is a volunteer process, if no one speaks up for a
particular viewpoint, that viewpoint isn't heard.
Even if all standards in development were posted for free on the Web, it
wouldn't help much. 802.11 was a many-year effort, at times a real mess
trying to merge pre-existing solutions into a standard. For many years it
was considered to be a failure that would never be released. For a long
time after it was released many thought it doomed to niche status by one
thing or another (Hiperlan, Bluetooth, or something else).
And this was just one of dozens (if not hundreds) of standards that have
been developed or are being developed for networking systems (and don't
forget busses, disk drive protocols, content rights systems, computer
architectures, Web protocols, etc.). Since it is impossible to know which
will win in the marketplace, and once one does it is too late to easily fix
its security problems, security experts would need to be part of EVERY
potentially relevant standards effort.
Don't blame the openness of the process, or the cost of the
standard. They're not the problem here. There are simply too many
different standards under development, and not enough people with the
knowledge, interest, or time to analyze them all.
From: Michael Rabin <rabin@cs.nyu.edu>
Subject: Ding-Rabin Provably Unbreakable Encryption
This is a brief response to "Harvard's 'Uncrackable' Crypto." Many experts
who heard lectures and read the paper were enthusiastic. Some of the less
informed reactions fit well into the following mold. William Cruikshank,
writing in 1790 about human physiology, said (not that hyper-encryption is
as important as Harvey's work): "When Harvey discovered the circulation of
the blood, his opponents first attempted to prove that he was mistaken; but
finding this ground untenable, they then asserted that it was known long
before...; when this failed they once more shifted their ground and said
the discovery was of no use."
Harvard Univ. or its public relations department had nothing to do with the
publicity for hyper-encryption.
Maurer proposed the bounded storage model. However, he does not have a
practical method which is provably unbreakable. He has suggested putting
us in touch with people to commercialize our work, but we are not following
this now.
That encryption is vitally important is evidenced by the hundreds of
millions spent on encryption technology and the billions spent on
code-breaking. Once discovered, the provably secure encryption will be
used sooner rather than later.
The proposal to store the bit stream by bouncing it between satellites is
infeasible because the mirroring method allows storage of about a second's
worth of bit transmission, while the adversary must store several weeks'
or even years' worth of the stream.
Full answers to various questions are to be found at the site pointed to
in my and Yan Zong Ding's Web pages at Harvard DEAS.
** *** ***** ******* *********** *************
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses,
insights, and commentaries on computer security and cryptography.
To subscribe, visit <https://www.counterpane.com/crypto-gram.html> or send a
blank message to crypto-gram-subscribe@chaparraltree.com. To unsubscribe,
visit <https://www.counterpane.com/unsubform.html>. Back issues are
available on <https://www.counterpane.com>.
Please feel free to forward CRYPTO-GRAM to colleagues and friends who will
find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as
it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is founder and CTO of
Counterpane Internet Security Inc., the author of _Secrets and Lies_ and
_Applied Cryptography_, and an inventor of the Blowfish, Twofish, and
Yarrow algorithms. He served on the board of the International Association
for Cryptologic Research, EPIC, and VTW. He is a frequent writer and
lecturer on computer security and cryptography.
Counterpane Internet Security, Inc. is a venture-funded company bringing
innovative managed security solutions to the enterprise.
<https://www.counterpane.com/>
Copyright (c) 2001 by Counterpane Internet Security, Inc.