Posted: September 17th, 2017
After completing this chapter, you should be able to
• Understand the concept of risk as applied to information security and infrastructure protection.
• Discuss the major principles of risk analysis.
• Identify and define the primary security technologies used to protect information.
• Discuss the various functions of firewalls, and identify their limitations.
• Define encryption, and discuss its use in terms of authenticity, integrity, and confidentiality.
• Identify and explain some of the security vendor technologies used today to secure information.
Risk analysis will always be an art informed by science. We cannot know all possible outcomes and weigh them rationally. Risk analysis involves projecting the most probable outcome and allocating available resources to address that outcome. At the same time, a risk analyst must remember that assets (computers, networks, etc.) were purchased to fulfill a mission. If risk management strategies substantially interfere with that mission, then the assets are no better off than if they had been compromised through a security-related risk.
This section introduces the concept of risk by discussing several epochs of computer development. Each era presents its own risk and at least somewhat functional responses to that risk. Early decisions weighing the risk of computers not providing a useful function against potential or unknowable future security threats produced results that we still live with today. It is easy to criticize early decisions based on our knowledge of the outcomes, but even with hindsight, we may fail to see that the benefit provided greatly outweighs the harm. In fact, some decisions that have produced security vulnerabilities were absolutely essential to the basic functioning of computers and networks for their intended purposes.
Mastering the Technology and the Environment
In the earliest days of computing, before extensive networking and multiple user systems, the primary problem faced by users was the technology itself. During these early days, programmers created the computer functions that we take for granted. 1 Early innovations included interactive operation (rather than batch processing and output), rudimentary networking, graphics, tools and utilities, and so on. 2 In many cases, the primary limitation was the capacity of the hardware. Limitations imposed by operating memory, storage, and processing speed each forced adaptations. The net effect was the absence of security. At the time, physical security (i.e., locked doors) was sufficient to protect computing resources. The primary concern of system architects was the expansion of useful function and overcoming hardware limitations. Although decisions made at this early point would later have negative effects on security, they were really unavoidable.
As technologies matured and found supporters in mainstream business, the computer moved from research platform to business tool. Complex software was created for business, and essential functions were transferred from armies of clerks to computer systems. Such moves were always in one direction. It is impossible to reemploy clerical staff and reimplement paper-based procedures once the existing system is gone. Further, the cost savings of computers make such a backward move unlikely. This placed new emphasis on availability of data and recovery from errors and disasters. Computer centers were created to concentrate technical expertise and provide a controlled environment in which to maximize the availability of computing resources. Innovations in fire suppression, 3 efficient environmental controls, 4 and administrative procedures (i.e., backup schedules) gave reasonable assurance against disaster.
The user was undeniably part of the computing environment. During this era, legitimate users were the primary human threat to computers. More harm was caused by failure to properly maintain systems and backup schedules than by intrusion or malicious intent. When malicious intent played a part, it was typically on the part of an insider. 5 Although there are documented cases of intrusion and loss, a much greater threat came from the relative scarcity of experts to operate and maintain systems. Once again, the operational need for availability was more pressing than security.
Personal Computers and Intruders
Although recreational system intrusion was not unknown in the previous era, it was largely restricted by access to computers. Few people had access, and fewer still had the skill to break through the rudimentary security on most systems. Those that did were often deeply invested in terms of time and resources spent to acquire that knowledge. 6 Recreational intrusion was a minor problem at best. The advent of the home computer in 1975 marked the beginning of the democratization of computing. It also marked the movement of hacking from the old-school era to the bedroom hacker era (see Chapter 4). 7 By the end of the 1970s, the restraint of peers and the investment in knowledge no longer provided reasonable protection against malicious users.
In this era, intruders sought knowledge and resources to continue their use of computers. Much of the literature is devoted to detailing the social connections or lack thereof among intruders and hackers. 8 Most pundits resort to the myth of the hacker as a loner, contentious in interactions with other hackers. 9 An often-overlooked facet of the hacker culture is the need for information and resources. In the early era of intrusion, access to other computers required a phone connection, usually to a long-distance number. Thus, the search for access to resources and knowledge of how to exploit them dominated the vast majority of hacker interactions.
For the first time, intruders became a significant threat to routine computer use. Although efforts were made to secure computers, long-standing demands for the availability of computing resources and the expansion of computer capabilities simply eclipsed demands for security. Hollinger and Lanza-Kaduce provide one of the very few significant criminological works describing the efforts to supplement the computer industries’ meager efforts toward security with law. 10 Various states and the federal government passed laws in the hopes of deterring would-be computer criminals and punishing those who were caught.
The Internet Explosion
The explosive growth of the Internet has been the subject of numerous books and articles in the popular press, 11 scholarly publications of general interest, 12 and works of technical research. 13 The dramatic influx of new users to computer networks has burdened both the technical infrastructure and the social cohesion of online communities. 14 The loss of social cohesion of the computer underground gave rise to script kiddies, low-skilled network intruders with little desire to pursue the traditional goals of hackers (see Chapter 4). At the same time, the influx of new users, also with low levels of skill, gave script kiddies and other larval hackers a rich field of targets. Unless a computer system holds particular interest (politically—like the World Trade Organization; technologically; or as a trophy—like NASA or the Pentagon), the most likely threat comes from script kiddies.
During this period, the amount of computerized data, such as bank records, personal information, and other electronic files, increased. Businesses and financial institutions store sensitive customer information in massive electronic databases that can be accessed and compromised by hackers. The increased use of online banking and shopping sites also allow consumers to transmit sensitive personal and financial information over the Internet. This created more attractive targets for criminals to engage in identity theft, fraud, and espionage.
In turn, the Internet increased the availability and proliferation of hacker tools and data and the professionalization of the hacker community. The recent emergence of malicious software markets and communities that engender the sale of stolen information as outlined in Chapters 5 and 6 makes it significantly easier for hackers to gain access to very sophisticated tools with little to no understanding of how they function. Such tools existed in the previous era, but were not as widely distributed or easily accessed. As a result, the global landscape of threats from hacking has changed dramatically, leading accomplished network intruders to offer their services for hire to unskilled hackers, terrorists, and organized crime groups. This has changed the way threats are perceived, though it is clear that the potential of the Internet to find and retrieve information from almost any jurisdiction has made it unlikely that a single nation’s efforts could remove these tools from wide circulation. It is also undesirable to remove technical information from our networks; technological advancement depends on the open exchange of ideas. 15
Principles of Risk Analysis
Risk analysis is performed at many levels and with many degrees of detail. Risk analysis services are provided by Fortune 500 consulting firms like Deloitte & Touche, Ernst & Young, KPMG, and many others. Reports from a major risk analysis can cover every aspect of business and run into thousands of pages. Risk analysis may also be performed by small organizations or even individuals in the course of their businesses. Such informal decision making does not follow the comprehensive steps of the more elaborate analyses. Deciding where to draw the line on the depth of a risk analysis is an art with no clear standard. The principles of risk analysis presented here are broad, but examples are narrowly tailored to illustrate network security risks and remedies. Above all, this process shows that there is no single solution to security. The effort required to eliminate all risks (even if that is possible) would surely overwhelm an organization. All known factors relevant to an organization should be weighed so that a specific level of acceptable risk can be matched to a risk management strategy.
Assessment and Evaluation
The first step of risk analysis is to correctly assess and evaluate the existing information technology (IT) systems within an organization. Unfortunately, those who conduct risk analysis studies often tend to focus on external vulnerabilities and threats. Most losses of information come from inside the organization and through relatively simple security failures of existing information systems and networks. This assessment must include an evaluation of the organizational, managerial, and administrative procedures directly relevant to IT systems. Ideally, the organization has and uses a well-developed and well-defined IT plan. This plan should include information relating to the acquisition and purchase of future IT equipment and systems, as well as a strategy for expanding information security parameters as the system grows or changes. Interestingly, most organizations have plans that represent large “bibles” of procedures and policies that sit on bookshelves (get dusty) and are rarely read or followed. Most managers and directors outside of IT fail to comprehend the complexity of information security. It is not simply the purchase of a new device or the implementation of a new password procedure, but rather an ongoing culture within the organization that reinforces security awareness and security protocols. Information and computer security must be a part of the everyday working culture of the organization. The information security plan is not just a written compendium of policies and procedures, but a living document that guides the organizational methodology for providing and improving information security. In this manner, the development of a specific risk analysis plan never really ends. New software and equipment acquisitions, new technologies, and new system requirements demand constant change and “tweaking” of the overall system, and hence, the security of the system as well.
Similar to or part of an information resource management (IRM) plan, risk analysis highlights synergism and cooperativeness as well as overall improvement in information exchange through the use of new and varied technology and work processes. 16 This cannot be a haphazard experiment; it must be a well-thought-out plan implemented through a series of phases on a timely basis. Without such objectives in mind, existing IT resources flounder and security breaches proliferate. It should go without saying that constant training improves security knowledge and engenders security awareness within an organization. Training must accompany the development of any risk analysis or information security plan.
Executives must also be aware of a growing problem existing within information system security. Most executives are not highly literate in information and computer technology. As such, they must rely on the technical advice and expertise of subordinates. This situation makes the executive vulnerable to “information security elites.” Recognized and identified by a number of scholars during the technology boom of the 1980s, information security elites often gain control over others (influence) and resist control by supervisors. 17 This is often accomplished by occupying key positions whose competencies are essential to the overall success of the organization. In some cases, executives are almost held “hostage” by those who have the niche technical knowledge to understand specific information security systems. Information is critically important to an organization. Individual managers and personnel must have immediate access to secure and correct data as well as communication systems that link them to a global marketplace. This demand places the highest value on those who can assure both the quality and security of data and the information system. It also creates a potential environment for significant security loss and abuse. Who is monitoring those in charge of information security? For this reason, external reviews have significant advantages over internal assessments, particularly in the area of risk analysis.
The second important aspect of risk analysis is to identify the threats facing an organization. The single largest threat to an organization and its information security is from within. Many times, organizations suffer from key individuals intentionally stealing information or corrupting files. The organization's vulnerability is extreme, and so is the potential loss. Additionally, in many instances of information security breaches from within, there is virtually no detection of an incident that has occurred. Many times, information is accessed, altered, stolen, or sabotaged without the organizational victim’s knowledge—either the crime is covered up through the use of special programs or simply not detected in an audit.
Other than routine personnel and employment checks, incorporating solid hiring practices, and constant monitoring, very little can be done to eliminate this type of threat entirely. The most pernicious part of this threat is that it invariably leads executives not to trust their subordinates. It takes just one relatively minor problem from within for an executive to realize just how vulnerable the organization really is to this type of attack. Paranoia can become a problem; the breakdown of trust between executives and managers causes other significant personnel problems. Once again, frank awareness and training can help to instill the type of organizational culture required in today’s information society.
Formal risk analysis also categorizes external threats with great detail and elaborate terms, especially after the events of September 11, 2001. Terrorism, sabotage, espionage, and criminal theft are real and comprise a bulk of the identified threats to an organization. Just as dangerous (and threatening) are the age-old issues presented by natural disasters, hurricanes, tornadoes, and earthquakes. Effective risk analysis should cover these types of standard threats as well. Interestingly, preparing for natural disasters often provides the type of secure information environment that thwarts insider problems and external attacks by people.
We are concerned primarily with risks and vulnerabilities to information systems and, as such, will focus our discussion on problems associated with these types of systems. These are usually summarized in terms of threats to information: integrity, authenticity, confidentiality, and availability. Threats to integrity are threats that actually alter data. Adding, moving, and deleting all change the integrity of data. These actions are also legitimate uses of data. The risk is that an unauthorized or unintentional action will cause a threat to integrity. To make matters more difficult, threats to integrity are not limited to malicious actions. Unintentional alteration (e.g., deleting a useful file) is far more common than malicious activity. The Recycle Bin found in Microsoft™ systems is a risk management tool to avoid such unintentional deletions. It trades hard drive space and annoying pop-up dialogs, asking you to be sure before you delete, for a second chance to recover a file.
Box 13.1. Complete Security Is Not the Complete Solution
Computer experts have long recognized that it is possible to make a computer completely secure. The steps are as follows: (1) unplug the network cable, (2) encrypt the data with strong encryption, (3) delete the encryption key and forget the password, (4) unplug the power, (5) lock the computer in a vault, and, finally, (6) cover the vault with concrete. This security procedure also makes the computer perfectly useless. For an asset to be worth protecting, it must have value. Most assets have value in the work they do for the business.
Authenticity is a more subtle concept, but it is related to integrity. In these terms, authenticity is the ability to trust the integrity of data. In broader terms, authenticity is the justified ability to trust. Data that cannot be verified or trusted are less useful than authenticated data, if not completely useless. Further, if an intruder is detected in the network, data may have perfect integrity (i.e., be unaltered), but if authenticity cannot be established, the organization cannot verify that fact. The data may as well be altered.
Confidentiality acknowledges that some information is more valuable if it is not publicly available. Confidentiality is treated more extensively in the section on encryption (below).
The final broad category of threat to information is availability. A denial-of-service attack can make information unavailable, but so can overly rigid security policies. What is the difference to the legitimate user who needs the information? Information systems exist to store, manipulate, and utilize data. When data are unavailable, the system’s fundamental purpose is disturbed. Each of these categories of threat must be balanced against each other in an attempt to manage risk.
Risk management is predicated on rationally weighing potential damage against the certain cost of non-mission-related expenditure. The concept of risk management acknowledges that while it may be possible to virtually eliminate all risks, the resources expended to do so would greatly outweigh the potential gains.
With this in mind, computer security experts seek to optimize the balance between the intended function of the information asset (network, computer, etc.) and securing the asset from risk. Many strategies of risk management overlap. For example, protecting a system from natural disaster by maintaining a decentralized network also protects the system from local network outages or denial-of-service attacks on a single point of failure.
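This balancing calculation is often formalized (though the text does not name it) as annualized loss expectancy (ALE), a standard risk-analysis measure. The sketch below is illustrative only; all dollar figures and probabilities are hypothetical.

```python
# Sketch of a standard risk-analysis calculation: annualized loss
# expectancy (ALE), used to weigh a safeguard's cost against the
# damage it is expected to prevent. All figures are hypothetical.

def ale(asset_value, exposure_factor, annual_rate_of_occurrence):
    """ALE = SLE * ARO, where the single loss expectancy (SLE) is
    asset value times the fraction of the asset lost per incident."""
    sle = asset_value * exposure_factor
    return sle * annual_rate_of_occurrence

def safeguard_is_justified(ale_before, ale_after, annual_safeguard_cost):
    """A safeguard pays for itself when the reduction in expected
    annual loss exceeds its annual cost."""
    return (ale_before - ale_after) > annual_safeguard_cost

# Hypothetical example: a $500,000 system, 40% exposure per incident,
# one incident expected every two years (ARO = 0.5).
before = ale(500_000, 0.40, 0.5)    # $100,000 expected loss per year
after = ale(500_000, 0.40, 0.05)    # a safeguard cuts ARO to 0.05
print(before, after, safeguard_is_justified(before, after, 30_000))
```

The point of the calculation is exactly the trade-off described above: a $30,000-per-year safeguard is rational here because it removes $90,000 of expected annual loss, but the same safeguard protecting a low-value asset would not be.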
The primary purposes of risk analysis are to identify threats and then to provide recommendations to address these threats. Concerning information systems and technologies, the focus has historically been on protecting individual components such as software, hardware, and connected devices. While a hard drive can be replaced, the information contained on the hard drive may be irreplaceable, and its loss may be catastrophic to an organization. Here, more than in almost any other area of risk analysis and information security, an ounce of prevention is truly worth a pound of cure! Coupled with solid personnel hiring practices and an aware organizational culture, the following security technologies provide a sound base for information systems and networks. 18
Backups are the single most important security measure a company or individual can take. As noted above, most of the value of an information system lies in its information. In the event that threats to integrity, authenticity, or availability cannot be prevented, the information can survive risk actualization if a backup copy exists in a safe location. A backup is a copy of data. If data are lost, destroyed, or altered, the backup may provide the only way to recover the data. If there is reason to suspect that the integrity of the data was compromised, a secure backup provides a basis of comparison. If the suspect data match the backup (plus any expected changes), integrity has been restored. If alteration is evident, then the harm from compromised data has been isolated.
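The "basis of comparison" role of a backup can be sketched in a few lines: hash the live copy and the backup copy, and matching digests indicate the data are unaltered. This is a minimal illustration, not from the text; the file paths are hypothetical.

```python
# Minimal sketch: using a secure backup as a basis of comparison.
# Matching SHA-256 digests indicate the live data are unaltered.
import hashlib

def file_digest(path):
    """Return the SHA-256 hex digest of a file, read in chunks so
    large files do not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def integrity_intact(live_path, backup_path):
    """True if the live file matches the backup copy byte for byte."""
    return file_digest(live_path) == file_digest(backup_path)
```

In practice, expected legitimate changes since the backup must be accounted for before a mismatch is treated as evidence of tampering, as the text notes.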
Mission-critical data (data that cannot be lost) are often protected with an instantaneous backup as they are stored. A device called a redundant array of inexpensive disks (RAID) saves the data to two hard drives at once, making failure of a single drive less damaging. 19 Since RAID mirroring only maintains a current second copy of the drive, regular backups are also necessary to protect against alteration.
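The mirroring idea can be shown with a toy model (illustrative only, not an actual RAID implementation): every write is committed to two "drives," so the loss of one drive does not lose the data.

```python
# Toy sketch of RAID mirroring: every write goes to two "drives" at
# once, so failure of a single drive does not lose the data. This
# models the idea only; real RAID operates at the disk-block level.
class MirroredStore:
    def __init__(self):
        self.drive_a = {}
        self.drive_b = {}

    def write(self, key, data):
        # The same block is committed to both drives simultaneously.
        self.drive_a[key] = data
        self.drive_b[key] = data

    def read(self, key):
        # If one drive has failed (entry missing), fall back to the other.
        if key in self.drive_a:
            return self.drive_a[key]
        return self.drive_b.get(key)
```

Note that mirroring faithfully copies *alterations* too, which is exactly why the text says regular backups are still necessary.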
Wireless Networks and Security
With the emergence of laptop and mobile computing has come the growth of wireless Internet access points, accessible by Wi-Fi connections. Wi-Fi connections are often synonymous with wireless local area networks, where machines can be connected to this network through the use of wireless network adapter cards. 20 Wireless networks are an especially vulnerable part of a network, as they can allow individuals inside a network boundary if they are not secured. Thus, there are several steps recommended to secure wireless networks. Specifically, wireless routers and connections must be secured through the use of an encryption protocol like wired equivalent privacy (WEP) or Wi-Fi protected access (WPA). 21 (WEP has since been shown to be easily broken, so WPA and its successors are preferred.) These tools help to reduce the likelihood of theft of service or unintended access, as individuals will be unable to gain access to the network without the proper passphrase or network key. Wireless signal beacons that make individuals aware of the presence of a network can also be turned off. This reduces the likelihood that individuals will attempt to compromise the network. There are also protocols that enable wireless routers to allow only known machines to access the network. This is accomplished through media access control, or MAC, address filtering, where the unique MAC address of each machine is kept on file, and those MAC addresses that are not recognized are unable to connect to the network. 22 Sensitive information and databases must also be secured and kept off of the wireless network in order to minimize the likelihood of an intruder gaining access. It is important to note, however, that wireless access points are extremely vulnerable to attacks, thus they can be a significant liability to network security. 23
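MAC address filtering, as described above, amounts to checking each connecting machine's hardware address against a list kept on file. A minimal sketch follows; the addresses are hypothetical, and a real router performs this check in firmware at association time.

```python
# Sketch of MAC address filtering: the router keeps an allow-list of
# known hardware addresses on file and refuses any address not listed.
# The addresses below are hypothetical examples.
ALLOWED_MACS = {
    "00:1a:2b:3c:4d:5e",   # office laptop (hypothetical)
    "00:1a:2b:3c:4d:5f",   # print server (hypothetical)
}

def may_connect(mac_address):
    """Normalize the address to lowercase and check the allow-list."""
    return mac_address.strip().lower() in ALLOWED_MACS
```

A design caveat worth noting: MAC addresses are transmitted in the clear and can be spoofed, so filtering raises the bar for casual intruders rather than providing strong security on its own.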
Box 13.2. Case Study in Risk Analysis—Backup Schedules
Although there is no rule as to how often backups should be created, a good rule of thumb is to consider how much of the data you can afford to reproduce. When you exceed that point, create a backup. For example, a company pays people to enter data. A backup system capable of handling their demands costs $12,000. They have 120 employees making $10 per hour. They create a backup every 10 hours (120 × 10 × 10 = 12,000). The benefit is that they only pay for the backup system once, with small recurring costs for tape and a portion of an administrator’s time. If they recover data from a ten-hour shift once, the system has paid for itself. If they suspect an intruder has tampered with the data, they can simply compare the data to a backup and restore the good data if necessary.
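The rule of thumb in Box 13.2 is simple enough to work as arithmetic: back up whenever the labor cost of re-entering the data since the last backup would exceed the cost of the backup system. The sketch below just restates the box's example as a calculation.

```python
# The Box 13.2 rule of thumb as arithmetic: the break-even backup
# interval is the number of labor-hours whose re-entry cost equals
# the cost of the backup system.
def breakeven_hours(system_cost, employees, hourly_wage):
    """Hours of data entry whose replacement cost equals the system cost."""
    return system_cost / (employees * hourly_wage)

# Figures from the example: a $12,000 system and 120 employees at
# $10 per hour, giving the ten-hour backup interval from the text.
print(breakeven_hours(12_000, 120, 10))  # -> 10.0
```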
Another way individuals and organizations can protect their computer networks is by using firewalls. A firewall is a device or software that acts as a checkpoint between a network or stand-alone computer and the Internet. A firewall checks all data coming in and going out. If the data do not fit strict rules, they do not go through. In the past, firewalls were not widely used, since they could significantly reduce access from the protected network to the outside world. Firewalls have since improved greatly and are now relatively fast and easily configured by the user. Software development, combined with rapidly increasing computer-based crime occurring through interconnected networks, has led to the widespread use of firewalls.
A firewall is a tool to be used as part of a security strategy. It cannot protect a user from all threats. In fact, firewalls are a classic example of risk analysis. If the rules set in a firewall are too restrictive (i.e., more secure), then normal network functions may be impaired. Such impairment may make it impossible for users to conduct business—the whole reason for the network in the first place. If the firewall rule set is not restrictive enough, it may expose the network to an intruder. This critical balance is the primary problem with firewalls. A special administrator is often necessary for large corporate firewalls. This administrator constantly balances security versus business efficiency to configure the firewall rule set. New applications or new work zones force new rules for the firewall. For example, a sales force in the field may need access to the corporate network. The firewall must allow these users to penetrate the network while excluding unknown users. One solution (see below under “Encryption”) is to allow a virtual private network (VPN) using encrypted data. Many early VPNs had difficulty penetrating firewalls because the firewall could not read the data headers (directions on how to deliver the data).
Perimeter and Host-Based Firewalls
The standard, corporate Internet firewall places a single barrier between the internal network and potential attackers. Servers that require public access (e.g., Web servers or mail servers) are placed outside the firewall to minimize penetration from the outside. Such servers, called “bastion” servers, have their own security measures. Like the curtain wall of a medieval castle, the firewall protects the interior, while towers, or bastions, provide hard defense points for contact with a threatening environment. This configuration places all of the defenses at the perimeter. If a malicious user attacks the network from within the “secure” area, there are no defenses. On the other hand, a single dedicated firewall can be built to handle a large volume of traffic without slowing the network.
A second layer of security can be provided with host-resident firewalls installed on machines within the secure perimeter. Host-resident firewalls are software programs installed on a computer. Clearly, there is an additional burden on every computer running such a program. There is also an additional cost for software and burden in administering host-resident firewalls. There is no requirement that all computers within a perimeter have a host-resident firewall. Particular resources within the secure perimeter may require additional protection. Using both types of firewall allows fine-grained control over network security. This means greater flexibility to manage risks. For example, a perimeter firewall might have to allow traffic into the perimeter due to user needs, but a host-resident firewall could restrict such traffic on all computers except those that need access. Additionally, antivirus programs and other protective software should be installed on machines as a further layer of protection against attack.
One of the most basic functions of a firewall is to block certain traffic that may be harmful. One of the protocol suites that runs the Internet, the Transmission Control Protocol and Internet Protocol (TCP/IP), assigns numbers to computers called Internet protocol addresses (IP addresses). TCP assigns “ports” to each computer that allow different programs to communicate. 24 Imagine that each computer on the Internet is a hotel. Each hotel has about 65,000 rooms (TCP provides 65,536 port numbers). The IP portion of TCP/IP identifies each hotel with a street address so that traffic can find it (IP address). The TCP portion of TCP/IP assigns room numbers so that traffic can go to the right place. Most of the higher rooms are not regularly used. Packet filtering allows a firewall to block traffic from a known bad location (a scruffy hotel in a bad neighborhood). It also allows the firewall to block traffic to a room that should be unused (which may indicate an unwanted “guest” in the hotel). The addressing information that is absolutely necessary for Internet communication is used to block potential threats. By using a firewall to block unused ports, many security threats can be eliminated. If the filtering rules are too aggressive, beneficial traffic may be blocked. Again, a security policy must carefully weigh the need for legitimate service versus security.
Stateful inspection relies on another feature of TCP. To travel across the Internet, data are broken into smaller chunks called packets. The IP layer addresses each packet to the right computer using that computer’s IP address. Each IP packet is an independent agent, finding its way across the Internet with the best guess of the shortest route. Sometimes packets take different routes and arrive out of order. These must be reassembled into the proper order before the computer can use them.
In addition to labeling ports, TCP reassembles packets by tracking sessions (a series of exchanges between computers). The TCP portion of each IP packet contains a sequence number that is based on a pseudorandom process negotiated between the two computers communicating in the session. Stateful inspection monitors these sequence numbers and other information to make sure that malicious data don’t enter into a stream of legitimate data. This is most frequently done by denying an external attempt to open a session with a computer that is not a server. Packet filtering cannot block all traffic (or else the computer may as well not be connected to the Internet), but stateful inspection can be used to ensure that only information requested from within the perimeter can enter.
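A greatly simplified sketch of that idea follows: the firewall admits an inbound packet only if it belongs to a session opened from inside the perimeter and carries the sequence number expected next. Real stateful inspection tracks far more (flags, windows, timeouts); this toy model is not from the text and exists only to show the mechanism.

```python
# Greatly simplified sketch of stateful inspection: inbound packets
# are admitted only if they belong to a session opened from inside
# the perimeter and carry the expected next sequence number.
class StatefulFirewall:
    def __init__(self):
        # (local address, remote address) -> next expected sequence number
        self.sessions = {}

    def open_from_inside(self, local, remote, initial_seq):
        """Record a session initiated by an internal machine."""
        self.sessions[(local, remote)] = initial_seq

    def admit(self, local, remote, seq):
        """Admit an inbound packet only for a known session with the
        expected sequence number; otherwise drop it."""
        key = (local, remote)
        if key not in self.sessions:
            return False          # external attempt to open a session
        if seq != self.sessions[key]:
            return False          # out of sequence: possibly injected data
        self.sessions[key] += 1   # advance the expected sequence number
        return True
```

The last rule in `admit` is the one that blocks the attack the text describes: malicious data cannot slip into a legitimate stream without matching the negotiated sequence.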
Network Address Translation (NAT)
Although technically not a security function, many firewalls allow administrators to use special reserved Internet addresses. All the machines inside the perimeter have these special addresses. These special addresses are not allowed on the Internet. To use Internet services, the internal machines start a session with the firewall server. The firewall uses NAT to assign an IP address to that session, not to the internal machine. Going back to the hotel analogy, imagine a desk clerk with mailboxes. Data traffic goes to the mailbox and is then delivered to the guest. That way, no junk mail gets through; in this case, junk mail would be specially crafted attacks against an internal computer designed to bypass firewalls. A firewall with NAT stops these attacks because it mediates all communication from inside to the outside.
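The desk-clerk analogy can be modeled as a translation table: each outbound session gets a public "mailbox" (port), replies to a known mailbox are relayed inward, and unsolicited traffic finds no mailbox and is dropped. The addresses below are hypothetical (documentation-reserved and private ranges), and the model is a sketch of the idea rather than a real NAT implementation.

```python
# Toy NAT table in the spirit of the desk-clerk analogy: internal
# machines use reserved private addresses; the firewall assigns each
# outbound session a public port ("mailbox") and relays replies back.
# Addresses are hypothetical (documentation/private ranges).
import itertools

class NatFirewall:
    PUBLIC_IP = "198.51.100.7"    # the firewall's single public address

    def __init__(self):
        self._ports = itertools.count(40000)
        self.table = {}           # public port -> (internal ip, internal port)

    def outbound(self, internal_ip, internal_port):
        """An internal machine opens a session: assign it a mailbox."""
        public_port = next(self._ports)
        self.table[public_port] = (internal_ip, internal_port)
        return self.PUBLIC_IP, public_port

    def inbound(self, public_port):
        """Replies are delivered only if some session created the
        mailbox; unsolicited traffic has no entry and is dropped."""
        return self.table.get(public_port)  # None means dropped
```

The security benefit falls out of the data structure: an attacker cannot address an internal machine directly, because the only routable address is the firewall's, and only mailboxes created from inside exist.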
Air gaps are the ultimate in firewall network protection. An air gap is literally a separation between computers containing nothing but air. That is, the machine is not on the network. Data have to be carried back and forth through the air gap by a person. The benefit to this is that no outsider can ever access the air-gapped machine. The downside is that no one on the inside can reach the air-gapped machine through the network. Air gaps are typically only used with the most sensitive data. Data carried to the air-gapped machine may contain malicious code like a virus, but the air-gapped machine is 100 percent immune to network attacks. Unfortunately, many security equipment vendors label devices as air gaps when they actually connect to the network; these are “virtual air gaps.” Absolute protection only occurs when there is no electrical connection—virtual air gaps do not fulfill this requirement.
Limitations of Firewalls
Firewalls are powerful and increasingly versatile tools for managing risks on a computer network; however, they are not the single solution to network security. Firewalls have to be managed to allow for growth in network function while maintaining the highest level of protection. This requires interaction between the firewall administrator and other users of the network. In large organizations, this interaction is formalized in a security policy. A security policy requires updates as situations change, and it requires auditing to ensure that users and management are complying with the policy. The security policy tells the firewall administrator how to balance risks when configuring the firewall. It also allows the security administrator to plan other layers of security, if necessary, to cover the holes in the firewall made necessary by legitimate network functions.
Firewalls cannot stop improper use of legitimate services. This is why bastion servers are placed outside the secure perimeter of the network. Servers exist to provide interaction with people and systems outside the company; however, many network attacks exploit flaws in server software to gain unfettered access to the computer running it. If the server were inside the firewall, such malicious traffic could enter the secure perimeter, and a compromised server would give the attacker a base of operations from which to explore the inside network.
Firewalls exist as part of an overall security strategy. They are not magical talismans that protect the network from all harm. Many users believe that the firewall will protect them; however, as we have seen in recent e-mail-based viruses and worms (see Chapter 4), this is not always so. Users surfing the Web from inside a firewall are still vulnerable to exploits written into the Web pages they access. There are remedies for these security risks not handled by firewalls, but they all exist as part of an overall security strategy.
Encryption
Bruce Schneier defines cryptography as “the art and science of securing messages.” 25 Messages can be any data. Encryption is a technique of securing data by scrambling it into apparent nonsense, but in such a way that the message can be recovered by a person possessing a secret code called a key. A good encryption scheme must protect the classic elements of computer security discussed above: authenticity, integrity, and confidentiality. (Availability is generally not facilitated by encryption.) Encrypting data inherently provides confidentiality, which matters most when confidential data must be sent through an insecure medium. An encrypted message (i.e., the scrambled message) can be stored or transmitted with a reasonable expectation of security, even if the medium used to transmit it is not secure (e.g., the Internet). Integrity can be assured through a related technique called hashing. Hashing produces a unique signature of the original data, like a fingerprint. At the other end of the transmission, a new hash is calculated on the data and compared to the old hash; if they match, the data have not been altered. A public key/private key system allows a user to authenticate data by matching a key, the only way to decode the data, with a well-known and publicly available key. This form of security is only as good as the secrecy of the key, but it offers a way to authenticate without being physically present (more about public key/private key systems below). Used together, these three techniques provide a reasonable expectation that a message is private, authentic, and unaltered.
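The fingerprint behavior of hashing is easy to demonstrate with Python's standard library. A minimal sketch (the message text is invented for illustration):

```python
import hashlib

# A hash acts as a fingerprint of the data: any change, however small,
# produces a completely different value.
message = b"Pay ACME Corp $100.00"
fingerprint = hashlib.sha256(message).hexdigest()

# The receiver recomputes the hash; a match means the data were not altered.
assert hashlib.sha256(b"Pay ACME Corp $100.00").hexdigest() == fingerprint

# A one-character change breaks the match.
tampered = b"Pay ACME Corp $900.00"
assert hashlib.sha256(tampered).hexdigest() != fingerprint
print("integrity check works")
```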
To illustrate this process, imagine a check being mailed to a creditor. You create a transaction by filling in an amount to a preformatted check with all of your bank information already on it. You also fill in the receiver’s name. You authenticate the instructions to your bank by signing the check. If there is any question as to who wrote the check, your signature can be checked against a signature card you filed when you opened the account. You place the check in an envelope to protect the confidentiality of your transaction. You then turn it over to the U.S. Postal Service and they deliver it. Evidence of tampering should be noticeable by the condition of the envelope. The elements of this transaction are as follows: authenticity—signature; confidentiality—envelope; and integrity—envelope. To conduct the same transaction over the Internet, you face different risks, but you still have tools to assist you.
To illustrate the online process, imagine that instead of writing and mailing a check, you log onto the bill payment section of your creditor’s Web site. You enter your confidential bank information. Since you cannot sign a computer screen, you provide information that only you should know—a password. This password serves as your signature. 26 In the same way that your signature was verified by your presence in a bank with identification documents, your password is verified with identifying information when you start to do business with the online entity. Since only information is being exchanged, not a physical object like a signature on paper, confidentiality of your identifying information is imperative. The authenticity of your password is directly related to its confidentiality because anyone with the password would appear to be you. Both this information and the transaction details are protected with encryption. Finally, the integrity of the transaction details is guaranteed with a hash that accompanies the encrypted message. It is possible, although usually pointless, to blindly alter an encrypted message—a process called bit-flipping. A hash calculates a unique mathematical value for a message. When the message reaches its destination and is decrypted, a new hash is calculated. Any alteration of the message contents will cause the hashes not to match. The elements of this transaction are as follows: authenticity—password; confidentiality—encryption; and integrity—hash value. The transaction can be carried by the insecure Internet and retain a reasonable expectation of security.
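Bit-flipping and the hash check that defeats it can be sketched together. The "cipher" below is a toy XOR keystream standing in for a real cipher, and the message is invented; the point is only that an attacker can alter ciphertext without the key, and that the accompanying hash exposes the alteration:

```python
import hashlib
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy cipher (NOT secure): XOR each byte with a repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(16)
plaintext = b"Transfer $100 to account 1234"
digest = hashlib.sha256(plaintext).hexdigest()   # hash sent alongside the message
ciphertext = xor_cipher(plaintext, key)

# Attacker flips one bit in transit, never seeing the key ("bit-flipping").
tampered = bytearray(ciphertext)
tampered[10] ^= 0x01

received = xor_cipher(bytes(tampered), key)
print(received != plaintext)                           # True: plaintext was changed
print(hashlib.sha256(received).hexdigest() == digest)  # False: hash exposes tampering
```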
A “reasonable expectation of security” does not mean absolute security. All encryption can be broken. “If the cost required to break an [encryption technique] is greater than the value of the encrypted data, it is probably safe.” 27 The most essential feature of encryption is that scrambled data can be returned to a useful form by the intended user and cannot easily be returned to a useful form by others. Again, the concept of risk analysis applies to this security technique: the more valuable the information transmitted, or the more determined a third party is to intercept it, the more resources must be devoted to protecting it.
Encryption is used to store passwords on your home computer system and credit card numbers in “online wallets” and to secure e-commerce transactions. Modern encryption can be broken, but except for the most basic forms of encryption, like the encryption protecting Windows™ passwords, most criminals would find it immensely easier to look for other methods to get the data. While e-commerce may be protected as it crosses the Internet, credit information is often stored on unsecured computers at the merchant’s site. It is important to remember that encrypted e-commerce data are decrypted at the other end. While e-commerce transactions are generally safe, few merchants have devoted the resources to securing their whole computer system.
Using Public Key/Private Key Encryption to Enhance Confidentiality
A public key and private key are parts of a matched pair of values: they are mathematically related in such a way that no other value could be substituted for either of them, yet it is computationally infeasible to derive one from the other. When a message is encrypted with one key, only the other can decrypt it. Thus, having access to a public key will not help an attacker (a person attempting to break the encryption) decrypt a message encrypted with that key. The idea of the key pair is that one key can be freely distributed without compromising the other. When a key pair is generated, either one may become the public key; that does not need to be determined until it is actually distributed. Finally, having a document encrypted with either one of the keys does not necessarily help an attacker guess the key value used to encrypt it.
Public keys are often registered with a public key authority. Anyone wishing to send a confidential message to the key owner can use the public key to encrypt the message.
The resulting encrypted document can be sent across an insecure channel, like the Internet. The receiver simply uses the private key associated with the well-known public key to decrypt the message.
A malicious user intercepting the encrypted message cannot successfully decrypt the message with the public key.
The process of encrypting a document simply assures the recipient of the confidentiality of the document’s contents. 28
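The key-pair property can be seen in miniature. The numbers below are the classic textbook RSA example (p = 61, q = 53); they are thousands of times too small for real security, but they show exactly what the text describes: what one key encrypts, only the other can decrypt.

```python
# Toy RSA key pair (textbook-sized; illustrative only).
p, q = 61, 53
n = p * q                      # 3233: the modulus, part of both keys
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # 2753: private exponent (modular inverse of e)

public_key, private_key = (e, n), (d, n)

def apply_key(value: int, key: tuple) -> int:
    exp, mod = key
    return pow(value, exp, mod)

message = 65
ciphertext = apply_key(message, public_key)   # anyone may encrypt with the public key
print(ciphertext)                             # 2790
print(apply_key(ciphertext, private_key))     # 65: only the private-key holder recovers it
# Applying the public key again does NOT recover the message; the pair is one-way.
```

A malicious user who intercepts `2790` and knows the public key `(17, 3233)` still cannot reverse the encryption without the private exponent.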
Using Digital Signatures to Enhance Authenticity and Integrity
Another application of the public key/private key pair allows a user to authenticate a document with a “digital signature.” Digital signatures use a hashing algorithm to create a unique numeric value associated with the document. This value has two primary attributes: first, if the unaltered document is hashed again, it will produce the same hash value; second, if the document is altered in any way, it will not. The weakness of hashing alone is that if the unencrypted original document is intercepted, it can be altered and a new hash value can be calculated from the altered document and substituted for the old one. This means that the hash must be sent through a separate, secure mechanism or it must be cryptographically protected.
This is where the second step of the digital signature comes in. Once the hash value is calculated, it is encrypted with the sender’s private key. Recall that having a document encrypted with the private key does not necessarily assist an attacker in trying to break the encryption or guess the key. To authenticate that the person claiming to have sent the message actually sent it, the signature is decrypted with the sender’s public key. The sender’s public key is known to be associated with the private key supposedly used to encrypt the message. If the signature decrypts properly, then the receiver knows that someone in possession of the supposed sender’s private key sent the message; this is assumed to be the supposed sender. 29
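Both steps, hash then encrypt with the private key, fit in a short sketch. It reuses textbook-sized toy RSA numbers (far too small for real use) and an invented document, and reduces the SHA-256 digest into the toy key's tiny range only so the arithmetic works:

```python
import hashlib

# Toy RSA pair (textbook primes; illustrative only).
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)

def hash_value(doc: bytes) -> int:
    # Shrink the SHA-256 digest to fit the toy modulus (real systems do not do this).
    return int.from_bytes(hashlib.sha256(doc).digest(), "big") % n

document = b"I agree to pay $100."
signature = pow(hash_value(document), d, n)   # hash encrypted with the PRIVATE key

# Receiver: decrypt the signature with the well-known PUBLIC key, rehash, compare.
print(pow(signature, e, n) == hash_value(document))                 # True: authentic, unaltered
print(pow(signature, e, n) == hash_value(b"I agree to pay $900."))  # False: altered document
```

Because only the private-key holder could have produced a signature that the public key decrypts correctly, a matching result authenticates the sender and the contents at once.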
Using Encryption in Daily Life
Most users do not directly use tools that provide public key/private key encryption. Pretty Good Privacy (PGP) is one example of a common encryption package; it is widely used to encrypt e-mail messages. In spite of the relative rarity of such packages, most computer users today use encryption without knowing it. Web browsers have built-in encryption functions. Secure Web pages, such as shopping cart checkout pages on e-commerce sites or any other secure information transfer page, use encryption to protect the transaction. This leads to an interesting problem with public key/private key encryption: it is very slow by computer standards. It is fine for small text files or very light graphics, but for wholesale secure transfer of information, it is simply inadequate. Symmetric key encryption is much faster, but it requires both parties to the transaction to have the same key, and that key cannot be left unprotected to provide convenient authentication. The solution uses a hybrid of symmetric key and public key/private key technology. A temporary symmetric key called a session key is generated for each data transfer “session” and sent using a public key/private key exchange; the session key is simply information to be encrypted. Once both parties have the key, fast exchange of secure information is possible. As soon as the session is done, the session key can be discarded.
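The hybrid scheme can be sketched end to end. Everything here is a deliberately scaled-down assumption: toy textbook RSA wraps the session key, and a toy XOR keystream stands in for a real symmetric cipher such as AES:

```python
import hashlib
import secrets

# Toy RSA pair for the slow, one-time key exchange (textbook primes; not secure).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: expand the key into a keystream and XOR (not secure).
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(b ^ s for b, s in zip(data, stream))

# 1. Generate a temporary session key (tiny only because the toy RSA key is tiny).
session_key = secrets.randbelow(n - 2) + 2

# 2. Wrap it with the slow public-key step, once.
wrapped = pow(session_key, e, n)
unwrapped = pow(wrapped, d, n)        # receiver unwraps with the private key
assert unwrapped == session_key

# 3. Bulk data now moves under the fast symmetric key.
bulk = b"A large volume of data moves under the fast symmetric key." * 100
ciphertext = xor_stream(bulk, session_key.to_bytes(2, "big"))
print(xor_stream(ciphertext, unwrapped.to_bytes(2, "big")) == bulk)  # True
# 4. When the session ends, the session key is simply discarded.
```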
Encryption and hashing can be part of a risk management strategy to reduce threats to data authenticity, integrity, and confidentiality. The particular techniques discussed only indicate the possible uses of encryption. For example, a technology called IPsec (Internet Protocol Security) utilizes cryptography to reduce threats to the authenticity, integrity, and confidentiality of data as they travel on a network. IPsec is used to create secure cryptographic “tunnels” across the Internet so that eavesdroppers do not know where a packet is going after its next stop or what it contains. Without IPsec or similar measures, information traversing the Internet is vulnerable to even relatively unsophisticated eavesdropping attempts. 30 This and many other uses of cryptographic technology provide options to manage threats.
The single greatest problem in computer security is password protection. Although there are some basic do’s and don’ts, there are also sophisticated software programs that address the issue. Several approaches have been taken, including password-creation software, one-time password generators, and user authentication systems such as biometric devices. A variety of software programs can help system administrators improve password security. Some programs force users to change their passwords on a regular basis, perhaps every week, every month, or every few months. Other programs automatically create random pronounceable passwords for users, such as “jrk^wud,” which is pronounced “jerk wood”; the user remembers that the ^ character sits between the two words. Such pronounceable passwords are not subject to dictionary attacks by hackers, are easily remembered by users, and do not relate to user information (such as a child’s first name or the user’s Social Security number) that might be easily determined by an intruder. Other programs force users to incorporate numbers, letters, and symbols into their passwords, making dictionary attacks more difficult. When combinations of these protocols are in place, passwords are made even stronger, thereby increasing the overall strength of the network.
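A generator in the spirit of “jrk^wud” can be sketched with cryptographically secure randomness. The syllable scheme below is a hypothetical illustration, not the algorithm of any particular product:

```python
import secrets

CONSONANTS = "bcdfghjklmnpqrstvwz"
VOWELS = "aeiou"
SYMBOLS = "^#%&*"

def syllable() -> str:
    # Consonant-vowel-consonant groups stay pronounceable without being words.
    return (secrets.choice(CONSONANTS) + secrets.choice(VOWELS)
            + secrets.choice(CONSONANTS))

def pronounceable_password() -> str:
    # Two syllables joined by a symbol, e.g. something like "muk%tav".
    return syllable() + secrets.choice(SYMBOLS) + syllable()

pw = pronounceable_password()
print(pw)   # random on each run
```

Because no output is a dictionary word and nothing derives from the user's personal information, the result resists both dictionary attacks and targeted guessing while remaining memorable.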
Security Vendor Technologies
SecurID, from Security Dynamics Technologies, Inc., is perhaps the most popular one-time password generator, with over three million users in 5,000 organizations worldwide. SecurID identifies and authenticates each individual user on the basis of two factors: (1) something secret the user knows, a memorized personal identification number (PIN), and (2) something unique the user physically holds, the SecurID card. Under this system, a user logging on first types in his or her PIN and then the number currently displayed on his or her SecurID card, which changes every 60 seconds. Each individual SecurID card is synchronized with hardware or software on the computer system the user is attempting to access. The result is a unique access code that is valid for only a particular user during a one-minute period.
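The general idea of a time-synchronized code can be sketched with an HMAC over the current time window, in the style of the open TOTP scheme. (The actual SecurID algorithm is proprietary; this is an illustrative stand-in, and the seed and timestamps are invented.)

```python
import hashlib
import hmac
import struct

def one_time_code(seed: bytes, t: float, window: int = 60) -> str:
    # Card and server share the seed and a clock; both compute the same
    # 6-digit code for the current 60-second window.
    counter = struct.pack(">Q", int(t) // window)
    digest = hmac.new(seed, counter, hashlib.sha1).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

seed = b"shared-secret-seed"
t = 999_960.0   # a fixed moment, for a deterministic illustration

# Same minute: card and server agree.
print(one_time_code(seed, t) == one_time_code(seed, t + 10))   # True
# Next minute: the displayed code changes (almost certainly to a new value).
print(one_time_code(seed, t + 60))
```

An intercepted code is useless a minute later, which is exactly the property that makes one-time generators resistant to replayed passwords.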
Kerberos, a leading network and data encryption system, is a program developed at MIT by the Athena Project. Cygnus Support, a Mountain View, California, company, has developed Kerberos-based user-authentication software, Cygnus Network Security (CNS), that eliminates the need to use clear, unencrypted text passwords on a network. In this system, an individual user is given an encryption key to encrypt and decrypt Kerberos passwords, log-ins, and other computer system transactions. When the user wants to access the network, he or she sends a message to the Kerberos server, which sends back an encrypted package that can be read only with that user’s secret key. This package also includes a temporary encryption key good for only that session. To prove his or her identity, the user then sends a message coded with the temporary encryption key back to the server. The Kerberos server acknowledges the user’s identity by sending a second encrypted message that can be decoded only with the temporary encryption key previously sent.
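The exchange described above can be modeled in a few lines. This is a drastically simplified assumption-laden sketch, not the Kerberos protocol: a toy XOR "cipher" stands in for real symmetric encryption, and the password and messages are invented.

```python
import hashlib
import secrets

def seal(data: bytes, key: bytes) -> bytes:
    # Toy cipher (NOT secure): XOR with a hash-derived keystream.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(b ^ s for b, s in zip(data, stream))

unseal = seal  # XOR is its own inverse

user_key = hashlib.sha256(b"user's password").digest()   # long-term key shared with the server

# 1. The server generates a temporary session key and seals it under the user's key;
#    the password itself never crosses the network.
session_key = secrets.token_bytes(16)
package = seal(session_key, user_key)

# 2. Only someone holding the user's key can recover the session key.
recovered = unseal(package, user_key)
assert recovered == session_key

# 3. The user proves identity by replying with a message sealed under the session key.
authenticator = seal(b"it is really me", recovered)
print(unseal(authenticator, session_key))   # b'it is really me'
```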
Kerberos is just one program that utilizes encryption and password technologies. Many other programs also create encrypted sessions between the user’s computer and the destination computer, thus protecting passwords and other sensitive information from cybercriminals and cyberterrorists.
Symantec, the maker of Norton Utilities, is also a world leader in information security technologies. Focusing primarily on Internet security issues, Symantec offers a variety of security monitoring plans, management services, best practices, proactive protection methodologies, and educational services for enterprises and home users. In a recent white paper distributed by Symantec, the authors eloquently point out that the Internet was developed primarily as an unregulated, open architecture. 31 This is an ideal environment for crime and terrorism, one in which simple passwords may not be enough.
New technologies are altering the face of user identification. Tools such as digital fingerprint identification, retinal identification, and voice recognition greatly increase the accuracy of user identification, sometimes replacing passwords. In fact, computers from various vendors now incorporate some form of biometrics through built-in cameras or fingerprint scanners. These features increase computer security by linking a user’s unique physical attributes (those that do not change or cannot be easily altered) to known passwords or stored encryption keys. Biometrics do not provide absolute security; they make it increasingly difficult for intruders to guess passwords by adding another level of complexity without making the password harder for the user to remember. A retinal scan combined with a short password is effectively a 10,000–20,000-digit password, and the longer the password, the harder it is to guess.
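A back-of-the-envelope calculation shows why length and alphabet size matter so much to guessing difficulty. The figures below are illustrative assumptions, not measurements of any real scanner or policy:

```python
import math

def guess_space_bits(alphabet_size: int, length: int) -> float:
    # Size of the search space an intruder faces, expressed in bits:
    # length * log2(alphabet). Each added bit doubles the work of guessing.
    return length * math.log2(alphabet_size)

print(round(guess_space_bits(26, 8)))    # ~38 bits: 8 lowercase letters
print(round(guess_space_bits(94, 12)))   # ~79 bits: 12 full-keyboard characters
```

Chaining a biometric factor onto even a modest password multiplies the effective search space the same way extra characters do, without asking the user to memorize anything longer.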
Cyber attacks have also focused on the home computer user. For instance, botnets infect and incorporate home computers into a broader network of compromised machines that can be used as a launch point for DDoS attacks, phishing, and spam distribution. The Conficker worm, for example, was discovered in November 2008 on a number of computers around the world. This worm spread over the Internet through vulnerabilities in Windows servers and systems and reportedly infected over nine million computers within a four-month period. 32 This worm is also a blended threat, as it propagates in a variety of ways and can be used to kill processes within the system and attack multiple components in any system. 33
As a result, home users must also take computer security very seriously. It is important to note, however, that the use of antiviral programs is not enough to provide total protection for a system. Almost 25 percent of personal computers around the world that use a variety of security solutions have malicious software loaded into their memory, compared with 33.28 percent of unprotected systems. 34 Thus, many computers and individuals can be victimized despite the presence and use of antivirus and other protective software programs. Aside from the routine backups and use of antiviral software, home users must periodically conduct maintenance on their machines. Using the analogy of a car: A person buys a car and then performs routine maintenance such as oil changes, brake replacement, tune-ups, and the like as the automobile is used. Similarly, people who purchase new personal computers need to conduct routine maintenance on their machines by updating their antiviral software on a regular basis, checking reports from their operating system Web site, and repairing software glitches through provided “patches” offered by a number of credible sources.
The personal computer is much more like an automobile that needs constant care and upkeep, rather than an appliance like a refrigerator that is simply plugged into the wall and will continue to function without much maintenance. For the most part, this is a user-awareness problem demanding a significant shift in attitude and philosophy on the part of literally billions of home users. Fortunately, all of the major operating system companies (e.g., Microsoft™, Linux™) provide well-developed Web sites that offer the latest information on problems and remedies. Software fixes and patches often require simply a stroke of the key to download a specific file from the Web site. Then too, the Computer Emergency Response Team (CERT®) at Carnegie Mellon University maintains a well-developed Web site that tracks and reports trends in computer viruses. 35 Established in 1988, the CERT® Coordination Center (CERT/CC) is a center of Internet security expertise, located at the Software Engineering Institute, a federally funded research and development center operated by Carnegie Mellon University. This site should be regularly visited by all computer users.
We live in the “third wave” of the information society, and undoubtedly, abuses in information security resulting in computer crime and cyberterrorism will only grow in the future. Unfortunately, the losses suffered directly from these abuses may be only part of the overall economic devastation facing organizations. Indeed, the civil litigation resulting from loss of privacy, denial or loss of service to partner corporations, and loss of reputation and key executives may constitute a far more lasting and severe economic loss to victims of computer crime and cyberterrorism. In addition, the individual committing these attacks will no longer be the relatively uneducated “crook” of the past. He or she may well be a very sophisticated criminal, a greedy inside employee, or a highly motivated terrorist. To be sure, the attacker will be attuned to the intricacies of computer databases and the means to defeat information security systems within closed networks and open architectures like the Internet.
Risk analysis offers a continual strategy to assess and evaluate current systems and potential threats. The recommendations offered by risk analysis exercises often focus on the development of an aware organizational culture as well as the use of various security technologies: backup and redundant file systems, firewalls, encryption, and the use of passwords. There are no guarantees or immunities. Developing such programs only offers a first line of defense, as the computer hacker, vandal, or cyberterrorist is just one step behind in his or her ability to defeat the latest security measure. Information system security must be flexible, dynamic, and ever improving, characterized by an organizational culture that not only appreciates the issues associated with information security, but encourages and supports the types of work processes, upfront expenses, and awareness training consistent with change.
1. What are the major principles of risk analysis? List the common steps in developing a risk analysis strategy.
2. Define the following terms in relation to information security: integrity, authenticity, confidentiality, and availability.
3. What is a firewall? Identify and explain some of the functions of a firewall.
4. What is TCP/IP?
5. What are the limitations of a firewall?
6. What is encryption and hashing, and how are they used to secure data and information?
7. How does a digital signature enhance authenticity and integrity of data?
8. How are new security technologies altering the face of user identification?
1HOLLINGER, R .C. (1997). Crime, Deviance, and the Computer. Brookfield, VT: Dartmouth Publishing Company.
2LEVY, S. (1984). Hackers: Heroes of the Computer Revolution. New York: Dell Publishing; and HAFNER, K., and LYON, M. (1996). Where Wizards Stay Up Late: The Origins of the Internet. New York: Touchstone.
3A fire in the data center was the ultimate disaster. Spraying water onto running computers would destroy them as surely as fires. A “dry” chemical called Halon™ and later CO2 fire suppression systems were, and still are, used to protect data centers.
4Computers generate large amounts of heat. Heat adversely affects the operation of computers when it builds up. Even the earliest computer required massive refrigeration systems and special floor-vent air conditioning systems. Failure of the air conditioner meant failure of the computer.
5BLOOMBECKER, B. (1990). Spectacular Computer Crimes: What They Are and How They Cost American Business Half a Billion Dollars a Year. Homewood, IL: Dow Jones-Irwin; and PARKER, D.B. (1976). Crime by Computer. New York: Scribner.
7LOPER, D.K. (2000). “The Criminology of Computer Hackers: A Qualitative and Quantitative Analysis.” Dissertation Abstracts International 61(8): AAT 9985422.
8HAFNER, K., and MARKOFF, J. (1991). Cyberpunk: Outlaws and Hackers on the Computer Frontier. New York: Touchstone; SLATALLA, M., and QUITTNER, J. (1995). Masters of Deception: The Gang that Ruled Cyberspace. New York: Harper Collins Publishers; LITTMAN, J., and DONALD, R. (1997). The Watchman: The Twisted Life and Crimes of Serial Hacker Kevin Poulsen. Boston, MA: Little, Brown and Company; LITTMAN, J. (1996). The Fugitive Game: Online with Kevin Mitnick. New York: Little, Brown and Company; MUNGO, J., and CLOUGH, S. (1992). Approaching Zero: The Extraordinary Underworld of Hackers, Phreakers, Virus Writers, and Keyboard Criminals. New York: Random House.
9SLATALLA and QUITTNER, Masters of Deception.
10HOLLINGER, R.C., and LANZA-KADUCE, L. (1988). “The Process of Criminalization: The Case of Computer Crime Laws.” Criminology 26: 101–126.
11HAFNER and LYONS, Where Wizards Stay Up Late.
12COMER, D.E. (1997). The Internet Book, 2nd ed. Upper Saddle River, NJ: Prentice Hall Inc.; and BAASE, S. (2003). A Gift of Fire: Social, Legal, and Ethical Issues in Computing, 2nd ed. Upper Saddle River, NJ: Prentice Hall Inc.
13HOWARD, J.D. (1997). “An Analysis of Security Incidents on the Internet 1989–1995.” (Doctoral dissertation, Carnegie Mellon University, 1997). Retrieved July 13, 1999 from the World Wide Web: https://www.cert.org/research/JHThesis/; and LOTTOR, M. (January 1992). Internet growth (1981–1991) (Request for Comments 1296). Menlo Park, CA: SRI. Retrieved July 13, 1999 from the World Wide Web: https://www.nw.com/zone/rfc1296.txt
14HAFNER, K. (2001). The Well: A Story of Love, Death and Real Life in the Seminal On line Community. New York: Carroll & Graf.
15Free Software Foundation (2001). Why We Exist. Retrieved March 15, 2001 from the World Wide Web: https://www.gnu.org/philosophy/philosophy.html
16The concept of an IRM plan was first introduced in the 1980s, as network systems first emerged. See CORBIN, D. (May 1988). “Strategic IRM Plan: User Involvement Spells Success.” Journal of Systems Management 39(5): 12–16.
17See KRAEMER, J., and DANZIGER, K. (January/February 1984). “Computers and Control in the Work Environment.” Public Administration Review 44: 32–42; and TAYLOR, R.W. (1989). “Managing Police Information.” In D.J. KENNEY (ed.), Police and Policing: Contemporary Readings. New York: Praeger Press.
18Much of this section has been adapted from TAYLOR, R.W., and LOPER, D.K. (2003). “Computer Crime.” In C.R. SWANSON, N. CHAMELIN, and L. TERRITO (eds.), Criminal Investigation, 8th ed. New York: McGraw-Hill. See also SWANSON, C.R., CHAMELIN, N., TERRITO, L., and TAYLOR, R. (2009). Criminal Investigation, 10th ed. New York: McGraw-Hill.
19RAID comes in many schemes. For other RAID configurations, see https://www.whatis.com
20VACCA, JOHN R. (2006). Guide to Wireless Network Security. New York: Springer.
24STEVENS, W.R. (1994). TCP/IP illustrated: The protocols (vol. 1). New York: Addison-Wesley.
25SCHNEIER, B. (1996). Applied Cryptography: Protocols, Algorithms, and Source Code in C, 2nd ed. New York: John Wiley & Sons.
26There are many other methods of on line authentication, including third-party trust-based relationships, token-based authentication, and challenge hand-shake mechanisms, but simple passwords (preshared secrets) illustrate this point well enough.
27SCHNEIER, Applied Cryptography.
30KAUFMAN, E., and NEWMAN, A. (1999). Implementing IPsec: Making Security Work on VPNs, Intranets, and Extranets. New York: John Wiley & Sons Inc.
31GORDON, S., and FORD, R. (2003). “Cyberterrorism?” A white paper distributed by Symantec Corporation, Cupertino, California.
32HIGGINS, KELLY JACKSON. (2009). “Storm Botnet Makes a Comeback.” Dark Reading. https://www.darkreading.com/security/vulnerabilities/showArticle.jhtml?articleID=212900543
34PandaLabs (2007). “Malware Infections in Protected Systems.” Retrieved November 1, 2007 from https://research.pandasecurity.com/blogs/images/wp_pb_malware_infections_in_protected_systems.pdf
Digital Crime and Digital Terrorism, Second Edition
Chapter 13: Information Security and Infrastructure Protection
ISBN: 9780137008773 Authors: Robert W. Taylor , Eric J. Fritsch , John Liederbach , Thomas J. Holt
Copyright © Pearson Education (2011)