
Chapter 3: Proper and Ethical Disclosure

code is constantly changing, re-creating the vulnerability can be difficult. And, in these instances, disclosing these vulnerabilities might not reduce the risk of them being exploited. Some are skeptical about using traditional vulnerability disclosure channels for vulnerabilities identified in website code.
Legally, website vulnerabilities may differ from typical software bugs, too. A software application might be considered the user's to examine for bugs, but posting proof of discovery of a vulnerable web system could be considered illegal because the system isn't purchased the way a specific piece of software is. Demonstrating proof of a web vulnerability may be considered an unintended use of the system and could create legal issues for a vulnerability researcher. For a researcher, giving up proof-of-concept exploit code could also mean handing over evidence in a future hacking trial: code that could be seen as proof that the researcher used the website in a way its creator didn't intend.
Disclosing web vulnerabilities is still in somewhat uncharted territory, as the infrastructure for reporting these bugs, and the security teams working to fix them, are still evolving. Vulnerability reporting for traditional software is still a work in progress, too. The debate between full disclosure and partial or no disclosure of bugs rages on. Though vulnerability disclosure guidelines exist, the models are not necessarily keeping pace with the constant creation and discovery of flaws. And though many disclosure policies have been written in the information security community, they are not always followed. If the guidelines aren't applied to real-life situations, chaos can ensue.
Public disclosure helps improve security, according to information security expert Bruce Schneier. He says that the only reason vendors patch vulnerabilities is full disclosure, and that there's no point in keeping a bug a secret: hackers will discover it anyway. Before full disclosure, he says, it was too easy for software companies to ignore the flaws and threaten the researcher with legal action. Ignoring the flaws was especially easy for vendors because an unreported flaw affected the software's users much more than it affected the vendor.
Security expert Marcus Ranum takes a dim view of public disclosure of vulnerabilities. He says that an entire economy of researchers is trying to cash in on the vulnerabilities they find, selling them to the highest bidder, whether for good or bad purposes. His take is that researchers are constantly seeking fame and that vulnerability disclosure is "rewarding bad behavior," rather than making software better.
But the vulnerability researchers who find and report bugs have a different take, especially when they aren't getting paid. Another issue that has arisen is that gray hats are tired of working for free without legal protection.

“No More Free Bugs”


In 2009, several gray hat hackers (Charlie Miller, Alex Sotirov, and Dino Dai Zovi) publicly announced a new stance: "No More Free Bugs." They argue that the value of software vulnerabilities often doesn't get passed on to gray hats, who find legitimate, serious flaws in commercial software. Along with iDefense and ZDI, the software vendors themselves have their own employees and consultants who are supposed to find and fix bugs. ("No More Free Bugs" is targeted primarily at the for-profit software vendors that hire their own security engineer employees or consultants.)
The researchers involved in "No More Free Bugs" also argue that gray hat hackers put themselves at risk when they report vulnerabilities to vendors. They have no legal protection when they disclose a found vulnerability, so they're not only working for free but also opening themselves up to threats of legal action. And gray hats often don't have access to the right people at the software vendor, those who can create and release the necessary patches. For many vendors, vulnerabilities mainly represent threats to their reputation and bottom line, and they may stonewall researchers' overtures, or worse. Although vendors create responsible disclosure guidelines for researchers to follow, they don't maintain guidelines for how they treat the researchers.
Furthermore, these researchers say that software vendors often depend on them to find bugs rather than investing enough in finding vulnerabilities themselves. It takes a lot of time and skill to uncover flaws in today's complex software, and the founders of the "No More Free Bugs" movement feel that vendors should either employ people to uncover these bugs and identify fixes or pay the gray hats who uncover and report them responsibly.
This group of gray hats also calls for more legal options when carrying out and reporting on software flaws. In some cases, gray hats have uncovered software flaws and the vendor has then threatened these individuals with lawsuits to keep them quiet and to help ensure the industry did not find out about the flaws. Table 3-1, taken from the website http://attrition.org/errata/legal_threats/, illustrates different security flaws that have been uncovered and the resolution or status of each report.
Of course, along with iDefense and ZDI's discovery programs, some software vendors do guarantee researchers they won't pursue legal action for reporting vulnerabilities. Microsoft, for example, says it won't sue researchers "that responsibly submit potential online services security vulnerabilities." And Mozilla runs a "bug bounty program" that offers researchers a flat $500 fee (plus a t-shirt!) for reporting valid, critical vulnerabilities. In 2009, Google offered a cash bounty for the best vulnerability found in Native Client.
Although more and more software vendors are reacting appropriately when vulnerabilities are reported (because of market demand for secure products), many people believe that vendors will not spend the extra money, time, and resources to carry out this process properly until they are held legally liable for software security issues. The possible legal liability software vendors may or may not face in the future is a can of worms we will not get into here, but these issues are gaining momentum in the industry.
Table 3-1 Vulnerability Disclosures and Resolutions

When | Company Making Threat | Researchers | Research Topic | Resolution/Status
2009-07-18 | RSA | Scott Jarkoff | Lack of SSL on Navy Federal Credit Union home page | C&D* sent to Mr. Jarkoff and his web host; information still available online (2009-08-12).
2009-07-17 | Comerica Bank | Lance James | XSS/phishing vulnerabilities on Comerica site | C&D sent to Tumblr; information removed but vulnerability still present (2009-07-17).
2008-08-13 | Sequoia Voting Systems | Ed Felten | Voting machine audit | Research still not published (2008-10-02).
2008-08-09 | Massachusetts Bay Transit Authority (MBTA) | Zach Anderson, RJ Ryan, and Alessandro Chiesa | Electronic fare payment (CharlieCard/CharlieTicket) | Gag order lifted; researchers hired by MBTA.
2008-07-09 | NXP (formerly Philips Semiconductors) | Radboud University Nijmegen | Mifare Classic card chip security | Research published.
2007-12-06 | Autonomy Corp., PLC | Secunia | KeyView vulnerability research | Research published.
2007-07-29 | U.S. Customs | Halvar Flake | Security training material | Researcher denied entry into U.S.; training cancelled at the last minute.
2007-04-17 | BeThere (Be Unlimited) | Sid Karunaratne | Publishing ISP router backdoor information | Researcher still in talks with BeThere; passwords redacted; patch supplied; ISP service not restored (2007-07-06).
2007-02-27 | HID Global | Chris Paget/IOActive | RFID security problems | Talk pulled; research not published.
2007-??-?? | TippingPoint Technologies, Inc. | David Maynor/ErrataSec | Reversing TippingPoint rule set to discover vulnerabilities | Unknown; appears threats and FBI visit stifled publication.
2005-07-29 | Cisco Systems, Inc. | Mike Lynn/ISS | Cisco router vulnerabilities | Resigned from ISS before settlement; gave BlackHat presentation; future disclosure injunction agreed on.
2005-03-25 | Sybase, Inc. | Next-Generation Security Software | Sybase database vulnerabilities | Threat dropped; research published.
2003-09-30 | Blackboard | Billy Hoffman and Virgil Griffith | Blackboard Transaction System | Blackboard issued C&D to Interz0ne conference and filed complaint against students; confidential agreement reached between Hoffman, Griffith, and Blackboard.
2002-07-30 | Hewlett-Packard Development Company, L.P. (HP) | SNOsoft | Tru64 Unix OS vulnerability, DMCA-based threat | Vendor/researcher agree on future timeline; additional Tru64 vulnerabilities published; HP asks Neohapsis for OpenSSL exploit code shortly after.
2001-07-16 | Adobe Systems Incorporated | Dmitry Sklyarov & ElcomSoft | Adobe eBook AEBPR bypass | ElcomSoft found not guilty.
2001-04-23 | Secure Digital Music Initiative (SDMI), Recording Industry Association of America (RIAA), and Verance Corporation | Ed Felten | Four watermark protection schemes bypass, DMCA-based threat | Research published at USENIX 2001.
2000-08-17 | Motion Picture Association of America (MPAA) & DVD Copy Control Association (DVD CCA) | 2600: The Hacker Quarterly | DVD encryption breaking software (DeCSS) | DeCSS ruled "not a trade secret."

*C&D stands for cease and desist.

References

Bruce Schneier, "Full Disclosure of Software Vulnerabilities a 'Damned Good Idea,'" January 9, 2007, www.csoonline.com/article/216205/Schneier_Full_Disclosure_of_Security_Vulnerabilities_a_Damned_Good_Idea_
IBM Internet Security Systems (X-Force team), "Vulnerability Disclosure Guidelines," ftp://ftp.software.ibm.com/common/ssi/sa/wh/n/sel03008usen/SEL03008USEN.PDF
Mozilla Security Bug Bounty Program, www.mozilla.org/security/bug-bounty.html
Charlie Miller, Alex Sotirov, and Dino Dai Zovi, "No More Free Bugs," www.nomorefreebugs.com
Scott Berinato, "Software Vulnerability Disclosure: The Chilling Effect," January 1, 2007, www.csoonline.com/article/221113/Software_Vulnerability_Disclosure_The_Chilling_Effect?page=1
Marcus J. Ranum, "The Vulnerability Disclosure Game: Are We More Secure?," March 1, 2008, www.csoonline.com/article/440110/The_Vulnerability_Disclosure_Game_Are_We_More_Secure_?CID=28073
Case Studies

The fundamental issue that this chapter addresses is how to report discovered vulnerabilities responsibly. The issue sparks considerable debate and has been a source of controversy in the industry for some time. Beyond a simple "yes" or "no" to the question of whether there should be full disclosure of vulnerabilities to the public, other factors should be considered, such as how communication should take place, what issues stand in the way of disclosure, and what experts on both sides of the argument are saying. This section dives into all of these pressing issues, citing recent case studies as well as industry analysis and opinions from a variety of experts.

Pros and Cons of Proper Disclosure Processes


How to follow professional procedures in regard to vulnerability disclosure is a major point of debate. Proponents of disclosure want additional structure, more rigid guidelines, and ultimately more accountability from vendors to ensure vulnerabilities are addressed in a judicious fashion. The process is not so cut and dried, however. There are many players, many different rules, and no clear-cut winners. It's a tough game to play and even tougher to referee.

The Security Community’s View


The top reasons many bug finders favor full disclosure of software vulnerabilities are:

• The bad guys already know about the vulnerabilities anyway, so why not
release the information to the good guys?
• If the bad guys don’t know about the vulnerability, they will soon find out
with or without official disclosure.
• Knowing the details helps the good guys more than the bad guys.
• Effective security cannot be based on obscurity.
• Making vulnerabilities public is an effective tool to use to make vendors
improve their products.

Maintaining their only stronghold on software vendors seems to be a common theme that bug finders and the consumer community cling to. In one example, a customer reported a vulnerability to his vendor. A full month went by with the vendor ignoring the customer's request. Frustrated and angered, the customer escalated the issue and told the vendor that if he did not receive a patch by the next day, he would post the full vulnerability on a user forum web page. The customer received the patch within one hour. These types of stories are very common and are continually cited by the proponents of full vulnerability disclosure.

The Software Vendors’ View


In contrast, software vendors view full disclosure with less enthusiasm:

• Only researchers need to know the details of vulnerabilities, even specific exploits.
• When good guys publish full exploitable code, they are acting as black hats and are not helping the situation, but making it worse.
• Full disclosure sends the wrong message and only opens the door to more
illegal computer abuse.

Vendors continue to argue that only a trusted community of people should be privy to virus code and specific exploit information. They state that groups such as the AV Product Developers' Consortium demonstrate this point. All members of the consortium are given access to vulnerability information so research and testing can be done across companies, platforms, and industries. They do not feel that there is ever a need to disclose highly sensitive information to potentially irresponsible users.

Knowledge Management
A case study at the University of Oulu titled "Communication in the Software Vulnerability Reporting Process" analyzed how the two distinct groups (reporters and receivers) interacted with one another and worked to find the root causes of breakdowns. The researchers determined that this process involves four main categories of knowledge:

• Know-what
• Know-why
• Know-how
• Know-who

The know-how and know-who are the two most telling factors. Most reporters don’t
know who to call and don’t understand the process that should be followed when they
discover a vulnerability. In addition, the case study divides the reporting process into
four different learning phases, known as interorganizational learning:

• Socialization phase When the reporting group evaluates the flaw internally to determine if it is truly a vulnerability
• Externalization phase When the reporting group notifies the vendor
of the flaw
• Combination phase When the vendor compares the reporter’s claim with its
own internal knowledge of the product
• Internalization phase When the receiving vendor accepts the notification and passes it on to its developers for resolution

One problem that apparently exists in the reporting process is the disconnect, and sometimes even resentment, between the reporting party and the receiving party. Communication issues seem to be a major hurdle to improving the process. From the case study, researchers learned that over 50 percent of the receiving parties who had received potential vulnerability reports indicated that less than 20 percent of those reports were actually valid. In these situations, the vendors waste a lot of time and resources on bogus issues.
Publicity The case study at the University of Oulu included a survey that asked whether vulnerability information should be disclosed to the public. The question was broken down into four individual statements that each group was asked to respond to:

• All information should be public after a predetermined time.
• All information should be public immediately.
• Some part of the information should be made public immediately.
• Some part of the information should be made public after a predetermined time.

As expected, the feedback from the questions validated the assumption that there is a decidedly marked difference of opinion between the reporters and the vendors. The reporters overwhelmingly feel that all information should be made public after a predetermined time, and they feel much more strongly than the receivers do about all information being made public immediately.

The Tie That Binds To further illustrate the important tie between reporters and vendors, the study concluded that the reporters are considered secondary stakeholders of the vendors in the vulnerability reporting process. Reporters want to help solve the problem, but are treated as outsiders by vendors. The receiving vendors often consider it a sign of weakness to involve a reporter in their resolution process. The concluding summary was that the two parties rarely have standard communications with one another. Ironically, when asked about ways to improve the process, both parties indicated that they thought communication should be more intense. Go figure!

Team Approach
Another study, titled "The Vulnerability Process: A Tiger Team Approach to Resolving Vulnerability Cases," offers insight into the effective use of teams within the reporting and receiving parties. To start, the reporters implement a tiger team, which breaks the functions of the vulnerability reporter into two subdivisions: research and management. The research team focuses on the technical aspects of the suspected flaw, while the management team handles the correspondence with the vendor and ensures proper tracking.
The tiger team approach breaks down the vulnerability reporting process into the following lifecycle (a small sketch of tracking these phases in code follows the list):

1. Research Reporter discovers the flaw and researches its behavior.
2. Verification Reporter attempts to re-create the flaw.
3. Reporting Reporter sends notification to receiver, giving thorough details about the problem.
4. Evaluation Receiver determines if the flaw notification is legitimate.
5. Repairing Solutions are developed.
6. Patch evaluation The solution is tested.
7. Patch release The solution is delivered to the reporter.
8. Advisory generation The disclosure statement is created.
9. Advisory evaluation The disclosure statement is reviewed for accuracy.
10. Advisory release The disclosure statement is released.
11. Feedback The user community offers comments on the vulnerability/fix.
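Because the lifecycle is strictly ordered, it maps naturally onto a small state machine. The following is a minimal sketch, not from either study, of how a tracking tool might enforce that ordering; the class and field names are illustrative assumptions.

# Minimal sketch: tracking a vulnerability case through the tiger-team
# lifecycle. Class and field names are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum, auto

class Phase(Enum):
    RESEARCH = auto()
    VERIFICATION = auto()
    REPORTING = auto()
    EVALUATION = auto()
    REPAIRING = auto()
    PATCH_EVALUATION = auto()
    PATCH_RELEASE = auto()
    ADVISORY_GENERATION = auto()
    ADVISORY_EVALUATION = auto()
    ADVISORY_RELEASE = auto()
    FEEDBACK = auto()

_ORDER = list(Phase)  # Enum preserves declaration order

@dataclass
class VulnerabilityCase:
    case_id: str
    phase: Phase = Phase.RESEARCH
    history: list = field(default_factory=list)

    def advance(self) -> Phase:
        """Move to the next lifecycle phase, recording the transition."""
        idx = _ORDER.index(self.phase)
        if idx + 1 >= len(_ORDER):
            raise ValueError("case is already in the final phase")
        self.history.append(self.phase)
        self.phase = _ORDER[idx + 1]
        return self.phase

For example, a case created with VulnerabilityCase("CASE-001") starts in RESEARCH, and each call to advance() walks it one step down the list, which mirrors how the study's management team is meant to ensure proper tracking.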

Communication When observing the tendencies of reporters and receivers, the case study researchers detected communication breakdowns throughout the process. They found that factors such as holidays, time zone differences, and workload issues were most prevalent. Additionally, it was concluded that the reporting parties were typically prepared for all their responsibilities and rarely contributed to time delays. The receiving parties, on the other hand, often experienced lag time between phases, mostly due to difficulties spreading the workload across a limited staff. This finding means the gray hats were ready and willing to be responsible parties in this process, but the vendors stated that they were too busy to do the same.
Secure communication channels between reporters and receivers should be established throughout the lifecycle. This requirement sounds simple, but, as the research team discovered, incompatibility issues often made the task more difficult than it appeared. For example, if the sides agree to use encrypted e-mail exchange, they must ensure they are using compatible protocols. If different protocols are in place, the chances of the receiver simply dropping the task greatly increase.
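As a minimal sketch of what one compatible channel might look like, assume both sides have settled on OpenPGP and the reporter has already imported and trusted the vendor's public key. The vendor address and file names below are hypothetical, and the example uses the third-party python-gnupg wrapper around a local GnuPG installation.

# Minimal sketch: OpenPGP-encrypting a vulnerability report for a vendor.
# Assumes the python-gnupg package and a local GnuPG install; the vendor
# address and file names are hypothetical placeholders.
import os
import gnupg

gpg = gnupg.GPG(gnupghome=os.path.expanduser("~/.gnupg"))

# The vendor's public key must already be imported and trusted, and both
# sides must have agreed on OpenPGP rather than, say, S/MIME.
vendor = "security@example-vendor.com"

with open("vuln_report.txt", "rb") as report:
    result = gpg.encrypt_file(report, recipients=[vendor],
                              output="vuln_report.txt.asc", armor=True)

if not result.ok:
    # A failure here is typically the incompatibility problem the study
    # describes: a missing or untrusted key on one side of the exchange.
    raise SystemExit("encryption failed: " + result.status)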

Knowledge Barrier There can be a huge difference in technical expertise between a receiver (vendor) and a reporter (finder), making communication all the more difficult. Vendors can't always understand what finders are trying to explain, and finders can become easily confused when vendors ask for more clarification. The tiger team case study found that the collection of vulnerability data can be quite challenging due to this major difference. Using specialized teams with specific areas of expertise is strongly recommended. For example, the vendor could appoint a customer advocate to interact directly with the finder. This party would be the middleman between engineers and the customer/finder.

Patch Failures The tiger team case study also pointed out some common factors that contribute to patch failures in the software vulnerability process, such as incompatible platforms, revisions, regression testing, resource availability, and feature changes.
Additionally, researchers discovered that, generally speaking, the lowest level of vendor security professionals work in maintenance positions, and this is usually the group that handles vulnerability reports from finders. The case study concluded that lower-quality patches are to be expected when this is the case.

Vulnerability Remains After Fixes Are in Place

Many systems remain vulnerable long after a patch/fix is released. This happens for several reasons. The customer is continually overwhelmed with the number of patches, fixes, updates, versions, and security alerts released each and every day. This is the motivation behind the new product lines and processes being developed in the security industry to deal with "patch management." Another issue is that many previously released patches broke something else or introduced new vulnerabilities into the environment. So although we can shake our fists at the network and security administrators who don't always apply released fixes, keep in mind that the task is usually much more difficult than it sounds.

Vendors Paying More Attention

Vendors are expected to provide foolproof, mistake-free software that works all the time. When bugs do arise, they are expected to release fixes almost immediately. It is truly a double-edged sword. However, the common practice of "penetrate and patch" has drawn criticism from the security community, as vendors simply release multiple temporary fixes to appease users and keep their reputations intact. Security experts argue that this ad hoc methodology does not exhibit solid engineering practices. Most security flaws occur early in the application design process. Good applications and bad applications are differentiated by six key factors:

• Authentication and authorization The best applications ensure that authentication and authorization steps are complete and cannot be circumvented.
• Mistrust of user input Users should be treated as "hostile agents": data is verified on the server side, input lengths are bounded to prevent buffer overflows, and strings are stripped of markup to prevent injection attacks. (A minimal sketch of these checks follows this list.)
• End-to-end session encryption Entire sessions should be encrypted, not just the portions of activity that contain sensitive information. In addition, secure applications should have short timeout periods that require users to re-authenticate after periods of inactivity.
• Safe data handling Secure applications also ensure data is safe while the system is in an inactive state. For example, passwords should remain encrypted while stored in databases, and secure data segregation should be implemented. Improper implementation of cryptography components has commonly opened doors to unauthorized access to sensitive data.
• Eliminating misconfigurations, backdoors, and default settings A common but insecure practice for many software vendors is to ship software with backdoors, utilities, and administrative features that help the receiving administrator learn and implement the product. The problem is that these enhancements usually contain serious security flaws. These items should always be disabled by default, requiring the customer to enable them, and all backdoors should be properly removed from the source code.
• Security quality assurance Security should be a core discipline when designing the product, during the specification and development phases, and during testing. Vendors who create security quality assurance (SQA) teams to manage all security-related issues are practicing due diligence.
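To make the input-handling and data-handling factors concrete, here is a minimal sketch of server-side validation and password storage in Python, using only the standard library. It is not taken from any vendor's code; the function names, the length limits, and the scrypt parameters are illustrative assumptions.

# Minimal sketch of "mistrust of user input" and "safe data handling".
# All names, limits, and parameters here are illustrative assumptions.
import hashlib
import hmac
import html
import os
import re

# Whitelist the allowed characters and bound the length on the server side.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{1,64}$")

def validate_username(raw: str) -> str:
    """Reject anything outside the whitelist; never trust client-side checks."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

def sanitize_for_display(raw: str, limit: int = 1024) -> str:
    """Bound the input length, then escape markup to block injection."""
    return html.escape(raw[:limit])

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store a salted scrypt digest, never the plaintext password."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Compare digests in constant time to avoid leaking timing information."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

The point of the sketch is the division of labor: validation and length bounds happen before the data is used, output is escaped at display time, and only salted digests of passwords are ever stored.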
So What Should We Do from Here on Out?
We can do several things to help improve the security situation, but everyone involved
must be more proactive, better educated, and more motivated. The following are some
items that should be followed if we really want to make our environments more secure:

• Act up It is just as much the consumers' responsibility as it is the developers' to ensure a secure environment. Users should actively seek out documentation on security features and ask for testing results from the vendor. Many security breaches happen because of improper customer configurations.
• Educate application developers Highly trained developers create more
secure products. Vendors should make a conscious effort to train their
employees in the area of security.
• Assess early and often Security should be incorporated into the design process from the early stages and tested often. Vendors should consider hiring security consulting firms to offer advice on how to implement security practices into the overall design, testing, and implementation processes.
• Engage finance and audit Getting the proper financing to address security concerns is critical to the success of a new software product, so budget committees and senior management should be engaged at an early stage.

iDefense and ZDI

iDefense is an organization dedicated to identifying and mitigating software vulnerabilities. Founded in August 2002, iDefense began employing researchers and engineers to uncover potentially dangerous security flaws in commonly used computer applications throughout the world. The organization uses lab environments to re-create vulnerabilities and then works directly with the vendors to provide a reasonable solution. iDefense's Vulnerability Contributor Program (VCP) has pinpointed more than 10,000 vulnerabilities, about 650 of which were found exclusively by iDefense, across a long list of applications. The company pays researchers up to $15,000 per vulnerability through its main program.
The Zero-Day Initiative (ZDI) has joined iDefense in the vulnerability reporting and compensation arena. ZDI, founded by the same people who founded iDefense's VCP, claims 1,179 researchers and more than 2,000 cases created since its August 2005 launch.
ZDI offers a web portal for researchers to report and track vulnerabilities. It performs identity checks on researchers who report vulnerabilities, including checking that the researcher isn't on any government "do not do business with" lists. ZDI then validates the bug in a security lab before offering the researcher a payment and contacting the vendor. ZDI also maintains its Intrusion Prevention Systems (IPS) program, writing filters for whatever customer areas are affected by the vulnerability. The filter descriptions are designed to protect customers but remain vague enough to keep the details of the unpatched flaw secret. ZDI works with the vendor on notifying the public when the patch is ready, giving the researcher credit if he or she requests it.
