In a change-controlled environment, which of the following is MOST likely to lead to unauthorized changes to
production programs?
Modifying source code without approval
Promoting programs to production without approval
Developers checking out source code without approval
Developers using Rapid Application Development (RAD) methodologies without approval
In a change-controlled environment, the activity most likely to lead to unauthorized changes to production programs is promoting programs to production without approval. A change-controlled environment follows a defined process for managing and tracking changes to the hardware and software components of a system or network, including their configuration, functionality, and security. Such an environment benefits security by helping to prevent or mitigate certain attacks and vulnerabilities and by supporting audit and compliance activities.
Promoting programs to production without approval is most likely to lead to unauthorized changes because it bypasses the change-control process itself: code is moved from the development or testing environment into production without authorization from the responsible parties, such as the change manager, change review board, or change advisory board. Any modification promoted this way reaches production unreviewed and untested. Modifying source code or checking it out without approval are also policy violations, but they occur earlier in the lifecycle, where subsequent review and approval gates can still catch the change before it reaches production.
Which of the following is the BEST way to reduce the impact of an externally sourced flood attack?
Have the service provider block the source address.
Have the source service provider block the address.
Block the source address at the firewall.
Block all inbound traffic until the flood ends.
The best way to reduce the impact of an externally sourced flood attack is to have the service provider block the source address. A flood attack is a type of denial-of-service attack that aims to overwhelm the target system or network with a large amount of traffic, such as SYN packets, ICMP packets, or UDP packets. An externally sourced flood attack is a flood attack that originates from outside the target’s network, such as from the internet. Having the service provider block the source address can help to reduce the impact of an externally sourced flood attack, as it can prevent the malicious traffic from reaching the target’s network, and thus conserve the network bandwidth and resources. Having the source service provider block the address, blocking the source address at the firewall, or blocking all inbound traffic until the flood ends are not the best ways to reduce the impact of an externally sourced flood attack, as they may not be feasible, effective, or efficient, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 745; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 525.
From a security perspective, which of the following assumptions MUST be made about input to an
application?
It is tested
It is logged
It is verified
It is untrusted
From a security perspective, the assumption that must be made about input to an application is that it is untrusted. Untrusted input is any data provided by an external or unknown source (a user, a client, a network, or a file) that the application has not yet validated or verified. Untrusted input poses a serious security risk because it can carry malicious content or commands, such as malware or SQL injection payloads, that compromise the confidentiality, integrity, or availability of the application and the systems connected to it. All input should therefore be treated with caution and suspicion and subjected to security controls before being processed. Input validation checks that the input meets the expected format, type, length, range, or value, and that it contains no invalid or illegal characters, symbols, or commands. Input sanitization removes or replaces invalid or illegal characters or commands to prevent or mitigate potential attacks. Input filtering allows or blocks input based on a predefined or configurable set of rules, such as a whitelist or a blacklist.
Input encoding transforms the input into a different or standard representation, such as HTML, URL, or Base64 encoding, so that the application or system does not interpret or execute it. It is tested, it is logged, and it is verified are not assumptions that must be made about input, although each may describe a real aspect of input handling. Testing (unit, integration, or penetration testing) evaluates the functionality and quality of the software and can detect errors, bugs, or vulnerabilities, but it is not a precaution against untrusted input and may not hold true for all input an application receives. Logging records input along with metadata such as its source, destination, timestamp, and status, providing a trace that supports audit and compliance activities, but it likewise does not protect the application from untrusted input.
Verification confirms or authenticates input using mechanisms such as digital signatures, certificates, or tokens to ensure its integrity and authenticity and to prevent tampering or spoofing. However, verification is also not an assumption the application can make; from a security perspective, every piece of input must be presumed untrusted until these controls have been applied.
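The four input-handling controls above can be sketched in a few lines of Python; the patterns, field names, and character lists below are illustrative assumptions, not requirements from any standard:

```python
import html
import re

# Whitelist pattern for a hypothetical username field: letters, digits,
# underscore, 3-20 characters. The specific rule is an assumption.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(value: str) -> bool:
    """Input validation: accept only the expected format, type, and length."""
    return bool(USERNAME_RE.fullmatch(value))

def sanitize_comment(value: str) -> str:
    """Input sanitization: strip characters this example treats as illegal."""
    return re.sub(r"[<>;]", "", value)

def encode_for_html(value: str) -> str:
    """Input encoding: neutralize markup so a browser renders it as text."""
    return html.escape(value)
```

A SQL-injection-style string such as `robert'); DROP TABLE users;--` fails the whitelist check, and `<script>` encodes to `&lt;script&gt;`, so the browser never executes it.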
A chemical plant wants to upgrade the Industrial Control System (ICS) to transmit data using Ethernet instead of RS422. The project manager wants to simplify administration and maintenance by utilizing the office network infrastructure and staff to implement this upgrade.
Which of the following is the GREATEST impact on security for the network?
The network administrators have no knowledge of ICS
The ICS is now accessible from the office network
The ICS does not support the office password policy
RS422 is more reliable than Ethernet
The greatest impact on security for the network is that the ICS is now accessible from the office network. This means that the ICS is exposed to more potential threats and vulnerabilities from the internet and the office network, such as malware, unauthorized access, data leakage, or denial-of-service attacks. The ICS may also have different security requirements and standards than the office network, such as availability, reliability, and safety. Therefore, connecting the ICS to the office network increases the risk of compromising the confidentiality, integrity, and availability of the ICS and the critical infrastructure it controls. The other options are not as significant as the increased attack surface and complexity of the network. References: Guide to Industrial Control Systems (ICS) Security | NIST, page 2-1; Industrial Control Systems | Cybersecurity and Infrastructure Security Agency, page 1.
Which of the following is the MOST important security goal when performing application interface testing?
Confirm that all platforms are supported and function properly
Evaluate whether systems or components pass data and control correctly to one another
Verify compatibility of software, hardware, and network connections
Examine error conditions related to external interfaces to prevent application details leakage
The most important security goal when performing application interface testing is to examine error conditions related to external interfaces to prevent application details leakage. Application interface testing is a type of testing that focuses on the interactions between different systems or components through their interfaces, such as APIs, web services, or protocols. Error conditions related to external interfaces can occur when the input, output, or communication is invalid, incomplete, or unexpected. These error conditions can cause the application to reveal sensitive or confidential information, such as error messages, stack traces, configuration files, or database queries, which can be exploited by attackers to gain access or compromise the system. Therefore, it is important to examine these error conditions and ensure that the application handles them properly and securely. Confirming that all platforms are supported and function properly, evaluating whether systems or components pass data and control correctly to one another, and verifying compatibility of software, hardware, and network connections are not security goals, but functional or performance goals of application interface testing. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 1000; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 7: Software Development Security, page 922.
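One way to act on this goal is a negative test that scans an interface's error responses for internal details; the leakage indicators below are illustrative, not an exhaustive or standard list:

```python
import re

# Hypothetical signatures of application-detail leakage in an error body.
LEAK_PATTERNS = [
    r"Traceback \(most recent call last\)",  # Python stack trace
    r"at [\w.$]+\(\w+\.java:\d+\)",          # Java stack frame
    r"ORA-\d{5}",                            # Oracle error code
    r"SELECT .+ FROM .+",                    # raw SQL echoed back
]

def leaks_details(error_body: str) -> bool:
    """Return True if an error response appears to expose internals."""
    return any(re.search(p, error_body) for p in LEAK_PATTERNS)
```

An interface test suite would assert that every induced error condition returns a generic message (for which `leaks_details` is False) rather than a stack trace or database error.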
Which of the following would MINIMIZE the ability of an attacker to exploit a buffer overflow?
Memory review
Code review
Message division
Buffer division
Code review is the technique that would minimize the ability of an attacker to exploit a buffer overflow. A buffer overflow occurs when a program writes more data to a buffer than it can hold, overwriting adjacent memory locations such as the return address or the stack pointer. An attacker can exploit this by injecting malicious code or data into the buffer and altering the execution flow of the program. Code review minimizes this risk by examining the source code to identify and fix the errors, flaws, and weaknesses that lead to buffer overflow vulnerabilities. It can detect the use of unsafe functions that perform no bounds checking, such as gets, strcpy, or sprintf, and require safer alternatives such as fgets, strncpy, or snprintf, which limit how much data can be written to the buffer. Code review also enforces secure coding practices, such as input validation, output encoding, error handling, and careful memory management, that reduce the likelihood or impact of buffer overflow vulnerabilities. Memory review, message division, and buffer division would not minimize an attacker's ability to exploit a buffer overflow, although each is a related concept. Memory review, analyzing a program's memory layout or contents (stack, heap, registers) to understand or debug its behavior, may help investigate an overflow after the fact but does not prevent one.
Message division, splitting a message into smaller or fixed-size segments or blocks as in cryptography or networking, may improve the security or efficiency of message transmission, but it does not prevent or mitigate buffer overflow. Buffer division, dividing a buffer into smaller or separate buffers as in buffering or caching, may optimize a program's memory usage or allocation, but it likewise does not prevent or mitigate buffer overflow.
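A small, hypothetical code-review aid along these lines might flag the unsafe C calls named above and suggest their bounded replacements. The scanner is a sketch: it matches call sites textually and ignores context such as comments and string literals:

```python
import re

# Unsafe C functions (no bounds checking) mapped to safer alternatives,
# following the list in the text.
UNSAFE_CALLS = {
    "gets": "fgets",
    "strcpy": "strncpy",
    "sprintf": "snprintf",
    "strcat": "strncat",
}

def flag_unsafe_calls(c_source: str) -> list[tuple[int, str, str]]:
    """Return (line_number, unsafe_call, suggested_replacement) tuples."""
    findings = []
    for lineno, line in enumerate(c_source.splitlines(), start=1):
        for bad, good in UNSAFE_CALLS.items():
            # \b prevents matching the safe variants (e.g. fgets, strncpy).
            if re.search(rf"\b{bad}\s*\(", line):
                findings.append((lineno, bad, good))
    return findings
```

Running it over a fragment that mixes an unsafe strcpy with an already-bounded snprintf flags only the former.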
Which of the following is MOST effective in detecting information hiding in Transmission Control Protocol/internet Protocol (TCP/IP) traffic?
Stateful inspection firewall
Application-level firewall
Content-filtering proxy
Packet-filter firewall
An application-level firewall is the most effective in detecting information hiding in TCP/IP traffic. Information hiding is a technique that conceals data or messages within other data or messages, such as using steganography, covert channels, or encryption. An application-level firewall is a type of firewall that operates at the application layer of the OSI model, and inspects the content and context of the network packets, such as the headers, payloads, or protocols. An application-level firewall can help to detect information hiding in TCP/IP traffic, as it can analyze the data for any anomalies, inconsistencies, or violations of the expected format or behavior. A stateful inspection firewall, a content-filtering proxy, and a packet-filter firewall are not as effective in detecting information hiding in TCP/IP traffic, as they operate at lower layers of the OSI model, and only inspect the state, content, or header of the network packets, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 731; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 511.
In an organization where Network Access Control (NAC) has been deployed, a device trying to connect to the network is being placed into an isolated domain. What could be done on this device in order to obtain proper
connectivity?
Connect the device to another network jack
Apply remediations according to security requirements
Apply Operating System (OS) patches
Change the Message Authentication Code (MAC) address of the network interface
Network Access Control (NAC) is a technology that enforces security policies and controls on devices that attempt to access a network. NAC verifies the identity and compliance of devices and grants or denies access based on predefined rules and criteria. NAC can also place devices into different domains or segments depending on their security posture and role. One such domain is the isolated domain, a restricted network segment for devices that do not meet the security requirements or that pose a potential threat to the network. Devices in the isolated domain have limited or no access to network resources and are subject to remediation actions. Remediation is the process of fixing or improving the security status of a device by applying the necessary updates, patches, configurations, or software; it can be performed automatically by the NAC system or manually by the device owner or administrator. Therefore, the best action for a device placed into an isolated domain by NAC is to apply remediations according to the security requirements, which restores the device's compliance and enables it to access the network normally.
Which of the following is the MOST efficient mechanism to account for all staff during a speedy nonemergency evacuation from a large security facility?
Large mantrap where groups of individuals leaving are identified using facial recognition technology
Radio Frequency Identification (RFID) sensors worn by each employee scanned by sensors at each exit door
Emergency exits with push bars, with coordinators at each exit checking off each individual against a predefined list
Card-activated turnstile where individuals are validated upon exit
Section: Security Operations
A company receives an email threat informing of an imminent Distributed Denial of Service (DDoS) attack
targeting its web application, unless ransom is paid. Which of the following techniques BEST addresses that threat?
Deploying load balancers to distribute inbound traffic across multiple data centers
Setting up Web Application Firewalls (WAFs) to filter out malicious traffic
Implementing reverse web-proxies to validate each new inbound connection
Coordinating with and utilizing capabilities within the Internet Service Provider (ISP)
The best technique to address the threat of an imminent DDoS attack targeting a web application is to coordinate with and utilize the capabilities within the ISP. A DDoS attack is a malicious attempt to disrupt the normal traffic of a targeted server, service, or network by overwhelming the target or its surrounding infrastructure with a flood of Internet traffic. A DDoS attack can cause severe damage to the availability, performance, and reputation of the web application, as well as incur financial losses and legal liabilities. Therefore, it is important to have a DDoS mitigation strategy in place to prevent or minimize the impact of such attacks. One of the most effective ways to mitigate DDoS attacks is to leverage the capabilities of the ISP, as they have more resources, bandwidth, and expertise to handle large volumes of traffic and filter out malicious packets. The ISP can also provide additional services such as traffic monitoring, alerting, reporting, and analysis, as well as assist with the investigation and prosecution of the attackers. The ISP can also work with other ISPs and network operators to coordinate the response and share information about the attack. The other options are not the best techniques to address the threat of an imminent DDoS attack, as they may not be sufficient, timely, or scalable to handle the attack. Deploying load balancers, setting up web application firewalls, and implementing reverse web-proxies are some of the measures that can be taken at the application level to improve the resilience and security of the web application, but they may not be able to cope with the magnitude and complexity of a DDoS attack, especially if the attack targets the network layer or the infrastructure layer. Moreover, these measures may require more time, cost, and effort to implement and maintain, and may not be feasible to deploy in a short notice. References: What is a distributed denial-of-service (DDoS) attack?; What is a DDoS Attack? 
DDoS Meaning, Definition & Types | Fortinet; Denial-of-service attack - Wikipedia.
Which security access policy contains fixed security attributes that are used by the system to determine a
user’s access to a file or object?
Mandatory Access Control (MAC)
Access Control List (ACL)
Discretionary Access Control (DAC)
Authorized user control
The security access policy that contains fixed security attributes used by the system to determine a user's access to a file or object is Mandatory Access Control (MAC). MAC is an access control model that assigns permissions to users and objects based on security labels, which indicate their level of sensitivity or trustworthiness. MAC is enforced by the system or the network rather than by the owner or creator of the object, and it cannot be modified or overridden by the users. MAC enhances the confidentiality and integrity of data, prevents unauthorized access or disclosure, and supports audit and compliance activities. It is commonly used in military and government environments, where data is classified by sensitivity (top secret, secret, confidential, or unclassified) and users are granted security clearances based on their trustworthiness, role, and need to know. Under the no-read-up and no-write-down rules of the Bell-LaPadula model, a user may only read data at or below their clearance level and may only write data at or above their clearance level. These security labels are the fixed attributes the system compares to make each access decision.
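The dominance check at the heart of this model can be sketched as follows; the four-level lattice mirrors the classifications in the text, while real MAC systems also layer compartments and categories on top of levels:

```python
# Toy Bell-LaPadula style dominance check over a linear label lattice.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_clearance: str, object_label: str) -> bool:
    """No read up: a subject may read only at or below its clearance."""
    return LEVELS[subject_clearance] >= LEVELS[object_label]

def can_write(subject_clearance: str, object_label: str) -> bool:
    """No write down: a subject may write only at or above its clearance."""
    return LEVELS[subject_clearance] <= LEVELS[object_label]
```

A subject cleared to secret can read a confidential file but cannot write to it, which is exactly the asymmetry the question's "fixed security attributes" enforce.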
What Is the FIRST step in establishing an information security program?
Establish an information security policy.
Identify factors affecting information security.
Establish baseline security controls.
Identify critical security infrastructure.
The first step in establishing an information security program is to establish an information security policy. An information security policy is a document that defines the objectives, scope, principles, and responsibilities of the information security program. An information security policy provides the foundation and direction for the information security program, as well as the basis for the development and implementation of the information security standards, procedures, and guidelines. An information security policy should be approved and supported by the senior management, and communicated and enforced across the organization. Identifying factors affecting information security, establishing baseline security controls, and identifying critical security infrastructure are not the first steps in establishing an information security program, but they may be part of the subsequent steps, such as the risk assessment, risk mitigation, or risk monitoring. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 22; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 14.
Which type of test would an organization perform in order to locate and target exploitable defects?
Penetration
System
Performance
Vulnerability
Penetration testing is a type of test that an organization performs in order to locate and target exploitable defects in its information systems and networks. Penetration testing simulates a real-world attack scenario, where a tester, also known as a penetration tester or ethical hacker, tries to find and exploit the vulnerabilities in the system or network, using the same tools and techniques as a malicious attacker. The goal of penetration testing is to identify the weaknesses and gaps in the security posture of the organization, and to provide recommendations and solutions to mitigate or eliminate them. Penetration testing can help the organization improve its security awareness, compliance, and resilience, and prevent potential breaches or incidents.
A Denial of Service (DoS) attack on a syslog server exploits weakness in which of the following protocols?
Point-to-Point Protocol (PPP) and Internet Control Message Protocol (ICMP)
Transmission Control Protocol (TCP) and User Datagram Protocol (UDP)
Address Resolution Protocol (ARP) and Reverse Address Resolution Protocol (RARP)
Transport Layer Security (TLS) and Secure Sockets Layer (SSL)
A DoS attack on a syslog server exploits weakness in TCP and UDP protocols. A syslog server is a server that collects and stores log messages from various devices on a network, such as routers, switches, firewalls, or servers. A syslog server uses either TCP or UDP protocols to receive log messages from the devices. A DoS attack on a syslog server can exploit the weakness of these protocols by sending a large volume of fake or malformed log messages to the syslog server, causing it to crash or become unresponsive. The other protocols are not relevant to a syslog server or a DoS attack. References: Denial-of-Service Attacks: History, Techniques & Prevention; What is a syslog server? | SolarWinds MSP.
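Syslog messages begin with a priority value defined as PRI = facility * 8 + severity (RFC 3164/5424). A defensive receiver validates this field before further parsing, since a flood of malformed messages is one way to stress a weak parser; the rejection behavior here is an illustrative choice, not mandated by the RFCs:

```python
# Standard syslog severity names, index 0 (emergency) through 7 (debug).
SEVERITIES = ["emerg", "alert", "crit", "err", "warning", "notice", "info", "debug"]

def decode_pri(pri: int) -> tuple[int, str]:
    """Split a PRI value into (facility, severity_name); reject out-of-range input."""
    if not 0 <= pri <= 191:              # 24 facilities x 8 severities
        raise ValueError(f"invalid PRI: {pri}")
    facility, severity = divmod(pri, 8)
    return facility, SEVERITIES[severity]
```

For example, the common PRI of 134 decodes to facility 16 (local0) at severity info, while anything above 191 is rejected instead of being processed.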
What protocol is often used between gateway hosts on the Internet?
Exterior Gateway Protocol (EGP)
Border Gateway Protocol (BGP)
Open Shortest Path First (OSPF)
Internet Control Message Protocol (ICMP)
 Border Gateway Protocol (BGP) is a protocol that is often used between gateway hosts on the Internet. A gateway host is a network device that connects two or more different networks, such as a router or a firewall. BGP is a routing protocol that exchanges routing information between autonomous systems (ASes), which are groups of networks under a single administrative control. BGP is used to determine the best path to reach a destination network on the Internet, based on various factors such as hop count, bandwidth, latency, and policy. BGP is also used to implement interdomain routing policies, such as traffic engineering, load balancing, and security. BGP is the de facto standard for Internet routing and is widely deployed by Internet service providers (ISPs) and large enterprises. The other options are not protocols that are often used between gateway hosts on the Internet. Exterior Gateway Protocol (EGP) is an obsolete protocol that was used to exchange routing information between ASes before BGP. Open Shortest Path First (OSPF) is a protocol that is used to exchange routing information within an AS, not between ASes. Internet Control Message Protocol (ICMP) is a protocol that is used to send error and control messages between hosts and routers, not to exchange routing information. References: Border Gateway Protocol - Wikipedia; What is Border Gateway Protocol (BGP)? - Definition from WhatIs.com; What is BGP? | How BGP Routing Works | Cloudflare.
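The decision logic BGP uses to pick a best path can be caricatured in one line; this sketch keeps only two of the many tie-breakers (highest LOCAL_PREF, then shortest AS_PATH) and omits origin, MED, router ID, and the rest of the real decision process:

```python
# Simplified BGP best-path selection: prefer the highest local preference,
# then the shortest AS path. Route dictionaries here are an assumed shape.
def best_path(routes: list[dict]) -> dict:
    return min(routes, key=lambda r: (-r["local_pref"], len(r["as_path"])))
```

With equal local preference, a two-hop AS path beats a three-hop one; raising LOCAL_PREF on any route overrides path length entirely, which is how operators implement routing policy on top of the protocol.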
What is the foundation of cryptographic functions?
Encryption
Cipher
Hash
Entropy
The foundation of cryptographic functions is entropy. Entropy is a measure of the randomness or unpredictability of a system or a process. Entropy is essential for cryptographic functions, such as encryption, decryption, hashing, or key generation, as it provides the security and the strength of the cryptographic algorithms and keys. Entropy can be derived from various sources, such as physical phenomena, user input, or software applications. Entropy can also be quantified in terms of bits, where higher entropy means higher randomness and higher security. Encryption, cipher, and hash are not the foundation of cryptographic functions, although they are related or important concepts or techniques. Encryption is the process of transforming plaintext or cleartext into ciphertext or cryptogram, using a cryptographic algorithm and a key, to protect the confidentiality and the integrity of the data. Encryption can be symmetric or asymmetric, depending on whether the same or different keys are used for encryption and decryption. Cipher is another term for a cryptographic algorithm, which is a mathematical function that performs encryption or decryption. Cipher can be classified into various types, such as substitution, transposition, stream, or block, depending on how they operate on the data. Hash is the process of generating a fixed-length and unique output, called a hash or a digest, from a variable-length and arbitrary input, using a one-way function, to verify the integrity and the authenticity of the data. Hash can be used for various purposes, such as digital signatures, message authentication codes, or password storage.
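Entropy in this sense can be measured directly; a minimal Shannon-entropy calculation over a byte string, in bits per byte, looks like this:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: H = -sum(p_i * log2(p_i))."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A byte string that uses all 256 values uniformly scores the maximum 8.0 bits per byte, while a constant string scores 0.0; keys and nonces generated from a weak source sit closer to the low end, which is exactly why entropy underpins cryptographic strength.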
Which of the following would BEST support effective testing of patch compatibility when patches are applied to an organization’s systems?
Standardized configurations for devices
Standardized patch testing equipment
Automated system patching
Management support for patching
Standardized configurations for devices can help to reduce the complexity and variability of the systems that need to be patched, and thus facilitate the testing of patch compatibility. Standardized configurations can also help to ensure that the patches are applied consistently and correctly across the organization. Standardized patch testing equipment, automated system patching, and management support for patching are also important factors for effective patch management, but they are not directly related to testing patch compatibility. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 605; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 386.
What is the BEST location in a network to place Virtual Private Network (VPN) devices when an internal review reveals network design flaws in remote access?
In a dedicated Demilitarized Zone (DMZ)
In its own separate Virtual Local Area Network (VLAN)
At the Internet Service Provider (ISP)
Outside the external firewall
The best location in a network to place Virtual Private Network (VPN) devices when an internal review reveals network design flaws in remote access is in a dedicated Demilitarized Zone (DMZ). A DMZ is a network segment that is located between the internal network and the external network, such as the internet. A DMZ is used to host the services or devices that need to be accessed by both the internal and external users, such as web servers, email servers, or VPN devices. A VPN device is a device that enables the establishment of a VPN, which is a secure and encrypted connection between two networks or endpoints over a public network, such as the internet. Placing the VPN devices in a dedicated DMZ can help to improve the security and performance of the remote access, as well as to isolate the VPN devices from the internal network and the external network. Placing the VPN devices in its own separate VLAN, at the ISP, or outside the external firewall are not the best locations, as they may expose the VPN devices to more risks, reduce the control over the VPN devices, or create a single point of failure for the remote access. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 729; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 509.
What are the steps of a risk assessment?
identification, analysis, evaluation
analysis, evaluation, mitigation
classification, identification, risk management
identification, evaluation, mitigation
The steps of a risk assessment are identification, analysis, and evaluation. Identification is the process of finding and listing the assets, threats, and vulnerabilities that are relevant to the risk assessment. Analysis is the process of estimating the likelihood and impact of each threat scenario and calculating the level of risk. Evaluation is the process of comparing the risk level with the risk criteria and determining whether the risk is acceptable or not. Mitigation is not part of the risk assessment, but it is part of the risk management, which is the process of applying controls to reduce or eliminate the risk. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 36; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 28.
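The three steps can be sketched in code; the asset list, scores, and acceptance threshold below are hypothetical illustrations, not values from any standard.

```python
# Illustrative sketch of the three risk-assessment steps:
# identification, analysis, evaluation. All data is made up.

RISK_CRITERIA = 10  # assumed acceptable-risk threshold on a 1-25 scale

def identify():
    """Step 1: list assets, threats, and vulnerabilities."""
    return [
        {"asset": "customer DB", "threat": "SQL injection",
         "likelihood": 4, "impact": 5},
        {"asset": "web server", "threat": "DDoS",
         "likelihood": 3, "impact": 3},
    ]

def analyze(scenarios):
    """Step 2: estimate likelihood x impact to get a risk level."""
    for s in scenarios:
        s["risk"] = s["likelihood"] * s["impact"]
    return scenarios

def evaluate(scenarios):
    """Step 3: compare each risk level with the risk criteria."""
    return [(s["asset"], s["risk"], s["risk"] <= RISK_CRITERIA)
            for s in scenarios]

results = evaluate(analyze(identify()))
for asset, risk, acceptable in results:
    print(asset, risk, "acceptable" if acceptable else "needs treatment")
```

Anything that fails the evaluation step would then move into risk management (treatment), which, as noted above, is outside the assessment itself.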
Which of the following MUST be scalable to address security concerns raised by the integration of third-party identity services?
Mandatory Access Controls (MAC)
Enterprise security architecture
Enterprise security procedures
Role Based Access Controls (RBAC)
Enterprise security architecture is the framework that defines the security policies, standards, guidelines, and controls that govern the security of an organization’s information systems and assets. Enterprise security architecture must be scalable to address the security concerns raised by the integration of third-party identity services, such as Identity as a Service (IDaaS) or federated identity management. Scalability means that the enterprise security architecture can accommodate the increased complexity, diversity, and volume of identity and access management transactions and interactions that result from the integration of external identity providers and consumers. Scalability also means that the enterprise security architecture can adapt to the changing security requirements and threats that may arise from the integration of third-party identity services.
Which of the following is the BEST internationally recognized standard for evaluating security products and systems?
Payment Card Industry Data Security Standards (PCI-DSS)
Common Criteria (CC)
Health Insurance Portability and Accountability Act (HIPAA)
Sarbanes-Oxley (SOX)
The best internationally recognized standard for evaluating security products and systems is the Common Criteria (CC), a framework that defines the criteria for evaluating the security functionality and security assurance of information technology (IT) products and systems, such as hardware, software, firmware, and network devices. The CC (standardized as ISO/IEC 15408) enhances confidence and trust in evaluated products, helps prevent or mitigate certain attacks and vulnerabilities, and supports audit and compliance activities. A CC evaluation involves elements such as Protection Profiles, Security Targets, and Evaluation Assurance Levels (EALs), and roles such as the sponsor, the evaluation laboratory, and the certification body.
Payment Card Industry Data Security Standard (PCI-DSS), Health Insurance Portability and Accountability Act (HIPAA), and Sarbanes-Oxley (SOX) are not internationally recognized standards for evaluating security products and systems, although they are relevant security regulations and frameworks. PCI-DSS defines security requirements for protecting cardholder data, such as the credit card number, expiration date, and card verification value, and applies to organizations that process, store, or transmit that data, such as merchants, service providers, and acquirers. HIPAA defines security requirements for protecting personal health information (PHI), such as medical records, diagnoses, and treatments, and applies to organizations that provide, pay for, or operate health care services or plans, such as health care providers, health care clearinghouses, and health plans. SOX defines requirements for the integrity of financial information and reports, such as the income statement, balance sheet, and cash flow statement, and applies to publicly traded companies.
As part of the security assessment plan, the security professional has been asked to use a negative testing strategy on a new website. Which of the following actions would be performed?
Use a web scanner to scan for vulnerabilities within the website.
Perform a code review to ensure that the database references are properly addressed.
Establish a secure connection to the web server to validate that only the approved ports are open.
Enter only numbers in the web form and verify that the website prompts the user to enter a valid input.
A negative testing strategy is a type of software testing that aims to verify how the system handles invalid or unexpected inputs, errors, or conditions. A negative testing strategy can help identify potential bugs, vulnerabilities, or failures that could compromise the functionality, security, or usability of the system. One example of a negative testing strategy is to enter only numbers in a web form that expects a text input, such as a name or an email address, and verify that the website prompts the user to enter a valid input. This can help ensure that the website has proper input validation and error handling mechanisms, and that it does not accept or process any malicious or malformed data. A web scanner, a code review, and a secure connection are not examples of a negative testing strategy, as they do not involve providing invalid or unexpected inputs to the system.
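As a sketch of the negative-testing idea, the hypothetical validator below is exercised with both valid and invalid input; the validation rule itself is an assumption for illustration, not taken from any real website.

```python
import re

def validate_name(value: str) -> bool:
    """Hypothetical rule: a name must start with a letter and may
    contain only letters, spaces, hyphens, and apostrophes."""
    return bool(re.fullmatch(r"[A-Za-z][A-Za-z '\-]*", value))

# Positive test: expected, well-formed input is accepted
assert validate_name("Alice") is True

# Negative tests: invalid or unexpected input must be rejected,
# mirroring "enter only numbers in the web form" from the question
assert validate_name("12345") is False   # numbers where text is expected
assert validate_name("") is False        # empty input
```

The negative cases are the point of the strategy: they confirm the system rejects malformed data instead of silently accepting or processing it.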
When determining who can accept the risk associated with a vulnerability, which of the following is the MOST important?
Countermeasure effectiveness
Type of potential loss
Incident likelihood
Information ownership
Information ownership is the most important factor when determining who can accept the risk associated with a vulnerability. Information ownership is the concept that assigns the roles and responsibilities for the creation, maintenance, protection, and disposal of information assets within an organization. Information owners are the individuals or entities who have the authority and accountability for the information assets, and who can make decisions regarding the information lifecycle, classification, access, and usage. Information owners are also responsible for accepting or rejecting the risk associated with the information assets, and for ensuring that the risk is managed and communicated appropriately. Information owners can delegate some of their responsibilities to other roles, such as information custodians, information users, or information stewards, but they cannot delegate their accountability for the information assets and the associated risk. Countermeasure effectiveness, type of potential loss, and incident likelihood are not the most important factors when determining who can accept the risk associated with a vulnerability, although they are relevant or useful factors. Countermeasure effectiveness is the measure of how well a security control reduces or eliminates the risk. Countermeasure effectiveness can help to evaluate the cost-benefit and performance of the security control, and to determine the level of residual risk. Type of potential loss is the measure of the adverse impact or consequence that can result from a risk event. Type of potential loss can include financial, operational, reputational, legal, or strategic losses. Type of potential loss can help to assess the severity and priority of the risk, and to justify the investment and implementation of the security control. Incident likelihood is the measure of the probability or frequency of a risk event occurring. 
Incident likelihood can be influenced by various factors, such as the threat capability, the vulnerability exposure, the environmental conditions, or the historical data. Incident likelihood can help to estimate the level and trend of the risk, and to select the appropriate risk response and security control.
Which of the following is the MOST common method of memory protection?
Compartmentalization
Segmentation
Error correction
Virtual Local Area Network (VLAN) tagging
 The most common method of memory protection is segmentation. Segmentation is a technique that divides the memory space into logical segments, such as code, data, stack, and heap. Each segment has its own attributes, such as size, location, access rights, and protection level. Segmentation can help to isolate and protect the memory segments from unauthorized or unintended access, modification, or execution, as well as to prevent memory corruption, overflow, or leakage. Compartmentalization, error correction, and VLAN tagging are not methods of memory protection, but of information protection, data protection, and network protection, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 589; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 370.
Which of the following is of GREATEST assistance to auditors when reviewing system configurations?
Change management processes
User administration procedures
Operating System (OS) baselines
System backup documentation
Operating System (OS) baselines are of greatest assistance to auditors when reviewing system configurations. OS baselines are standard or reference configurations that define the desired and secure state of an OS, including its settings, parameters, patches, and updates. OS baselines provide benefits such as consistency across systems, a repeatable hardening process, and a clear reference point for audits and compliance checks.
OS baselines are of greatest assistance to auditors when reviewing system configurations, because they can enable the auditors to evaluate and verify the current and actual state of the OS against the desired and secure state of the OS. OS baselines can also help the auditors to identify and report any gaps, issues, or risks in the OS configurations, and to recommend or implement any corrective or preventive actions.
The other options are not of greatest assistance to auditors when reviewing system configurations, but rather of assistance for other purposes or aspects. Change management processes are processes that ensure that any changes to the system configurations are planned, approved, implemented, and documented in a controlled and consistent manner. Change management processes can improve the security and reliability of the system configurations by preventing or reducing the errors, conflicts, or disruptions that might occur due to the changes. However, change management processes are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the procedures and controls for managing the changes. User administration procedures are procedures that define the roles, responsibilities, and activities for creating, modifying, deleting, and managing the user accounts and access rights. User administration procedures can enhance the security and accountability of the user accounts and access rights by enforcing the principles of least privilege, separation of duties, and need to know. However, user administration procedures are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the rules and tasks for administering the users. System backup documentation is documentation that records the information and details about the system backup processes, such as the backup frequency, type, location, retention, and recovery. System backup documentation can increase the availability and resilience of the system by ensuring that the system data and configurations can be restored in case of a loss or damage. 
However, system backup documentation is not of greatest assistance to auditors when reviewing system configurations, because it does not define the desired and secure state of the system configurations, but rather the backup and recovery of the system configurations.
A Virtual Machine (VM) environment has five guest Operating Systems (OS) and provides strong isolation. What MUST an administrator review to audit a user’s access to data files?
Host VM monitor audit logs
Guest OS access controls
Host VM access controls
Guest OS audit logs
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OSs and provides strong isolation. A VM environment is a system that allows multiple virtual machines (VMs) to run on a single physical machine, each with its own OS and applications, providing benefits such as resource efficiency, isolation between workloads, and flexibility in provisioning.
A guest OS is the OS that runs on a VM, which is different from the host OS that runs on the physical machine. A guest OS can have its own security controls and mechanisms, such as access controls, encryption, authentication, and audit logs. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the data files. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents.
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, because they can provide the most accurate and relevant information about the user’s actions and interactions with the data files on the VM. Guest OS audit logs can also help the administrator to identify and report any unauthorized or suspicious access or disclosure of the data files, and to recommend or implement any corrective or preventive actions.
The other options are not what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, but rather what an administrator might review for other purposes or aspects. Host VM monitor audit logs are records that capture and store the information about the events and activities that occur on the host VM monitor, which is the software or hardware component that manages and controls the VMs on the physical machine. Host VM monitor audit logs can provide information about the performance, status, and configuration of the VMs, but they cannot provide information about the user’s access to data files on the VMs. Guest OS access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the resources and services on the guest OS. Guest OS access controls can provide a proactive and preventive layer of security by enforcing the principles of least privilege, separation of duties, and need to know. However, guest OS access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the data files. Host VM access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the VMs on the physical machine. Host VM access controls can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, host VM access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the VMs.
In which of the following programs is it MOST important to include the collection of security process data?
Quarterly access reviews
Security continuous monitoring
Business continuity testing
Annual security training
Security continuous monitoring is the program in which it is most important to include the collection of security process data. Security process data is the data that reflects the performance, effectiveness, and compliance of the security processes, such as the security policies, standards, procedures, and guidelines. Security process data can include metrics, indicators, logs, reports, and assessments, and it provides benefits such as visibility into control effectiveness, evidence for audits, and input for improving the security program.
Security continuous monitoring is the program in which it is most important to include the collection of security process data, because it is the program that involves maintaining the ongoing awareness of the security status, events, and activities of the system. Security continuous monitoring can enable the system to detect and respond to any security issues or incidents in a timely and effective manner, and to adjust and improve the security controls and processes accordingly. Security continuous monitoring can also help the system to comply with the security requirements and standards from the internal or external authorities or frameworks.
The other options are not the programs in which it is most important to include the collection of security process data, but rather programs that have other objectives or scopes. Quarterly access reviews are programs that involve reviewing and verifying the user accounts and access rights on a quarterly basis. Quarterly access reviews can ensure that the user accounts and access rights are valid, authorized, and up to date, and that any inactive, expired, or unauthorized accounts or rights are removed or revoked. However, quarterly access reviews are not the programs in which it is most important to include the collection of security process data, because they are not focused on the security status, events, and activities of the system, but rather on the user accounts and access rights. Business continuity testing is a program that involves testing and validating the business continuity plan (BCP) and the disaster recovery plan (DRP) of the system. Business continuity testing can ensure that the system can continue or resume its critical functions and operations in case of a disruption or disaster, and that the system can meet the recovery objectives and requirements. However, business continuity testing is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the continuity and recovery of the system. Annual security training is a program that involves providing and updating the security knowledge and skills of the system users and staff on an annual basis. Annual security training can increase the security awareness and competence of the system users and staff, and reduce the human errors or risks that might compromise the system security. 
However, annual security training is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the security education and training of the system users and staff.
Which of the following is a PRIMARY benefit of using a formalized security testing report format and structure?
Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken
Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability
Management teams will understand the testing objectives and reputational risk to the organization
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure. Security testing is a process that evaluates and verifies the security posture, vulnerabilities, and threats of a system or network using methods such as vulnerability assessment, penetration testing, code review, and compliance checks; it can identify weaknesses before attackers do, validate the effectiveness of controls, and support compliance requirements.
A security testing report is a document that summarizes and communicates the findings and recommendations of the security testing process to the relevant stakeholders, such as the technical and management teams. A report can take various formats and structures depending on its scope, purpose, and audience; a formalized format and structure is one that follows a standard and consistent template, such as the one proposed by the National Institute of Standards and Technology (NIST) in Special Publication 800-115, Technical Guide to Information Security Testing and Assessment. A formalized report typically contains components such as an executive summary, introduction, methodology, results, conclusions, and recommendations.
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure, because it can ensure that the security testing report is clear, comprehensive, and consistent, and that it provides the relevant and useful information for the technical and management teams to make informed and effective decisions and actions regarding the system or network security.
The other options are not the primary benefits of using a formalized security testing report format and structure, but rather secondary or specific benefits for different audiences or purposes. Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the executive summary component of the report, which is a brief and high-level overview of the report, rather than the entire report. Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the methodology and results components of the report, which are more technical and detailed parts of the report, rather than the entire report. Management teams will understand the testing objectives and reputational risk to the organization is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the introduction and conclusion components of the report, which are more contextual and strategic parts of the report, rather than the entire report.
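A minimal sketch of such a formalized report structure might look like the following; the class and section names are illustrative assumptions loosely inspired by the NIST SP 800-115 layout, not a prescribed format.

```python
# Sketch of a formalized security testing report with a fixed,
# consistent section order. Section names are assumptions.

from dataclasses import dataclass, field

@dataclass
class Finding:
    title: str
    impact_level: str       # e.g. "low", "moderate", "high"
    recommendation: str

@dataclass
class SecurityTestReport:
    executive_summary: str
    methodology: str
    findings: list = field(default_factory=list)

    def render(self) -> str:
        """Emit the report sections in a consistent order, so both
        technical and management readers know where to look."""
        lines = ["# Executive Summary", self.executive_summary,
                 "# Methodology", self.methodology, "# Findings"]
        for f in self.findings:
            lines.append(
                f"- {f.title} (impact: {f.impact_level}) -> {f.recommendation}")
        return "\n".join(lines)

report = SecurityTestReport(
    executive_summary="One moderate-impact issue was found.",
    methodology="Black-box testing of the public web application.",
    findings=[Finding("Reflected XSS in search form", "moderate",
                      "Encode output; add input validation.")],
)
print(report.render())
```

Fixing the section order is the design point: executives can stop at the summary while technical teams drill into the findings, yet every report from every engagement reads the same way.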
Which of the following could cause a Denial of Service (DoS) against an authentication system?
Encryption of audit logs
No archiving of audit logs
Hashing of audit logs
Remote access audit logs
Remote access audit logs could cause a Denial of Service (DoS) against an authentication system. A DoS attack is a type of attack that aims to disrupt or degrade the availability or performance of a system or a network by overwhelming it with excessive or malicious traffic or requests. An authentication system is a system that verifies the identity and credentials of the users or entities that want to access the system or network resources or services. An authentication system can use various methods or factors to authenticate the users or entities, such as passwords, tokens, certificates, biometrics, or behavioral patterns.
Remote access audit logs are records that capture and store the information about the events and activities that occur when the users or entities access the system or network remotely, such as via the internet, VPN, or dial-up. Remote access audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the remote access behavior, and facilitating the investigation and response of the incidents.
Remote access audit logs could cause a DoS against an authentication system, because they could consume a large amount of disk space, memory, or bandwidth on the authentication system, especially if the remote access is frequent, intensive, or malicious. This could affect the performance or functionality of the authentication system, and prevent or delay the legitimate users or entities from accessing the system or network resources or services. For example, an attacker could launch a DoS attack against an authentication system by sending a large number of fake or invalid remote access requests, and generating a large amount of remote access audit logs that fill up the disk space or memory of the authentication system, and cause it to crash or slow down.
The other options are not the factors that could cause a DoS against an authentication system, but rather the factors that could improve or protect the authentication system. Encryption of audit logs is a technique that involves using a cryptographic algorithm and a key to transform the audit logs into an unreadable or unintelligible format, that can only be reversed or decrypted by authorized parties. Encryption of audit logs can enhance the security and confidentiality of the audit logs by preventing unauthorized access or disclosure of the sensitive information in the audit logs. However, encryption of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or privacy of the audit logs. No archiving of audit logs is a practice that involves not storing or transferring the audit logs to a separate or external storage device or location, such as a tape, disk, or cloud. No archiving of audit logs can reduce the security and availability of the audit logs by increasing the risk of loss or damage of the audit logs, and limiting the access or retrieval of the audit logs. However, no archiving of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the availability or preservation of the audit logs. Hashing of audit logs is a technique that involves using a hash function, such as MD5 or SHA, to generate a fixed-length and unique value, called a hash or a digest, that represents the audit logs. Hashing of audit logs can improve the security and integrity of the audit logs by verifying the authenticity or consistency of the audit logs, and detecting any modification or tampering of the audit logs. 
However, hashing of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or verification of the audit logs.
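The integrity-verification role of hashing described above can be sketched with Python's standard hashlib; the log line is a made-up example.

```python
import hashlib

def digest(log_bytes: bytes) -> str:
    """Compute a SHA-256 digest of an audit log for later verification."""
    return hashlib.sha256(log_bytes).hexdigest()

original = b"2024-01-01 09:00 user=alice action=login result=success\n"
baseline = digest(original)  # stored separately at log-collection time

# Later: recompute the digest and compare to detect tampering
tampered = original.replace(b"alice", b"mallory")
assert digest(original) == baseline   # unchanged log verifies
assert digest(tampered) != baseline   # any modification changes the digest
```

As the explanation notes, this protects the integrity of the logs; it has no effect on the authentication system's availability.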
Which of the following is the MOST appropriate action when reusing media that contains sensitive data?
Erase
Sanitize
Encrypt
Degauss
The most appropriate action when reusing media that contains sensitive data is to sanitize the media. Sanitization is the process of removing or destroying all data on the media so that it cannot be recovered by any means; it can be achieved by methods such as overwriting, degaussing, or physical destruction, and it ensures that sensitive data is not exposed or compromised when the media is reused or disposed of. Erase, encrypt, and degauss are not the most appropriate actions, although they may be related or useful steps. Erasing deletes data through operating system or application commands but does not guarantee removal, as it can leave remnants recoverable with special tools. Encryption transforms data into an unreadable form using a cryptographic algorithm and key; it protects the data from unauthorized disclosure but does not remove it from the media, and it depends on secure key management and a strong algorithm. Degaussing applies a strong magnetic field to erase or scramble the data; it can effectively sanitize magnetic media such as hard disks or tapes, but it does not work on optical media such as CDs or DVDs, and it renders the media unusable by destroying the servo tracks and firmware needed for it to function.
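A single-file overwrite, one common sanitization building block, can be sketched as follows. This is illustrative only: defensible media sanitization should follow guidance such as NIST SP 800-88, and overwriting alone is unreliable on wear-leveled flash media (SSDs, USB drives).

```python
import os
import tempfile

def overwrite_file(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents in place, then delete it.
    Illustrative sketch of the overwriting method; not a substitute
    for full media sanitization per NIST SP 800-88."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # random-pattern pass
            f.flush()
            os.fsync(f.fileno())
        f.seek(0)
        f.write(b"\x00" * size)         # final zero pass
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)

# Demo on a throwaway temp file
fd, path = tempfile.mkstemp()
os.write(fd, b"card=4111111111111111")
os.close(fd)
overwrite_file(path)
print(os.path.exists(path))  # False: contents overwritten, file removed
```

Note this operates on one file through the filesystem; sanitizing whole media also has to account for slack space, remapped sectors, and filesystem metadata.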
Who is accountable for the information within an Information System (IS)?
Security manager
System owner
Data owner
Data processor
The data owner is the person who has the authority and responsibility for the information within an Information System (IS). The data owner is accountable for the security, quality, and integrity of the data, as well as for defining the classification, sensitivity, retention, and disposal of the data. The data owner must also approve or deny the access requests and periodically review the access rights. The security manager, the system owner, and the data processor are not accountable for the information within an IS, but they may have roles and responsibilities related to the security and operation of the IS. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 48; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 40.
Which of the following is a common characteristic of privacy?
Provision for maintaining an audit trail of access to the private data
Notice to the subject of the existence of a database containing relevant credit card data
Process for the subject to inspect and correct personal data on-site
Database requirements for integration of privacy data
A common characteristic of privacy is notice to the subject of the existence of a database containing relevant credit card data. Privacy is the right or expectation of an individual or group to control or limit the collection, use, disclosure, or retention of their personal or sensitive information by others. Privacy involves principles that are shared across regulatory standards and frameworks such as the GDPR, HIPAA, and PIPEDA. One of these common principles is notice, which requires that the data subject be informed of aspects such as the existence of the database, the identity of the data collector, the purposes of the collection, and the intended uses and disclosures of the data.
Notice can provide some benefits for privacy, such as enhancing the transparency and the accountability of the data collection or processing activities, respecting the consent and the preferences of the data subject, and supporting the compliance and the enforcement of the privacy laws or regulations. Provision for maintaining an audit trail of access to the private data, process for the subject to inspect and correct personal data on-site, and database requirements for integration of privacy data are not common characteristics of privacy, although they may be related or important aspects of privacy. Provision for maintaining an audit trail of access to the private data is a technique that involves recording and storing the logs or the records of the events or the activities that occur on a database or a system that contains private data, such as who accessed, modified, or deleted the data, when, where, how, and why. Provision for maintaining an audit trail of access to the private data can provide some benefits for privacy, such as enhancing the visibility and the traceability of the data access or processing activities, preventing or detecting any unauthorized or improper access or processing, and supporting the audit and the compliance activities. However, provision for maintaining an audit trail of access to the private data is not a common characteristic of privacy, as it is not a principle or a tenet that is shared across different regulatory standards or frameworks, and it may vary depending on the type or the nature of the private data. Process for the subject to inspect and correct personal data on-site is a technique that involves providing a mechanism or a procedure for the data subject to access and verify their personal data that is stored or processed on a database or a system, and to request or make any changes or corrections if needed, such as updating their name, address, or email. 
On-site inspection and correction improves the accuracy and reliability of personal data, respects the rights and interests of the data subject, and supports compliance with privacy laws and regulations. However, it is not a common characteristic of privacy, because it is not a principle shared across regulatory standards or frameworks, and it varies with the type and nature of the personal data. Database requirements for integration of privacy data are the specifications or criteria that a database or system containing privacy data should meet, such as its design, architecture, functionality, or security. Such requirements improve the performance and functionality of the database, help prevent or mitigate some attacks or vulnerabilities, and support audit and compliance activities. However, they are likewise not a common characteristic of privacy, because they are not a principle shared across regulatory standards or frameworks, and they vary with the type and nature of the privacy data.
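The audit-trail idea discussed above can be made concrete. The following is a minimal, hypothetical sketch (the decorator, field names, and in-memory log are all invented for illustration; a real system would write to durable, append-only, tamper-evident storage):

```python
import datetime

AUDIT_LOG = []  # illustrative in-memory trail; real systems use append-only storage

def audited(action):
    """Decorator that records who accessed which record, and when."""
    def wrap(fn):
        def inner(user, record_id, *args, **kwargs):
            AUDIT_LOG.append({
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "user": user,
                "action": action,
                "record": record_id,
            })
            return fn(user, record_id, *args, **kwargs)
        return inner
    return wrap

@audited("read")
def read_record(user, record_id):
    return {"id": record_id}  # stand-in for a real database lookup

read_record("alice", 42)  # leaves one "read" entry in AUDIT_LOG
```

Every call now leaves a who/what/when entry that an auditor can review, which is the traceability benefit described above.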
With what frequency should monitoring of a control occur when implementing Information Security Continuous Monitoring (ISCM) solutions?
Continuously without exception for all security controls
Before and after each change of the control
At a rate concurrent with the volatility of the security control
Only during system implementation and decommissioning
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing Information Security Continuous Monitoring (ISCM) solutions. ISCM is a process that maintains ongoing awareness of the security status, events, and activities of a system or network by collecting, analyzing, and reporting security data and information using various methods and tools. ISCM provides benefits such as timely detection of threats and vulnerabilities, ongoing visibility into the security posture, and support for risk-based decision making.
A security control is a measure or mechanism implemented to protect the system or network from security threats or risks by preventing, detecting, or correcting security incidents or impacts. Controls can be administrative, technical, or physical, and preventive, detective, or corrective. A control also has a level of volatility: the degree or frequency with which it changes, driven by factors such as security requirements, the threat landscape, or the system or network environment.
Monitoring a control at a rate concurrent with its volatility ensures that the ISCM solution captures the current, accurate state and performance of the control and identifies and reports any issues or risks affecting it. It also helps optimize ISCM resources and effort by allocating them according to the priority and urgency of each control.
The other options are incorrect or unrealistic frequencies that would cause problems or inefficiencies. Continuously without exception for all security controls is incorrect because it is neither feasible nor necessary to monitor every control at the same constant rate regardless of its volatility or importance; doing so wastes resources and overwhelms the ISCM solution with excessive or irrelevant data. Before and after each change of the control is incorrect because it is not sufficient or timely to monitor a control only when it changes and not during normal operation; this misses the security events that occur between changes and delays detection of and response to incidents affecting the control. Only during system implementation and decommissioning is incorrect because it is not appropriate or effective to monitor a control only during the initial and final stages of the system lifecycle; this neglects the security events that occur during regular operation and prevents the ISCM solution from improving and optimizing the control.
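As a sketch of "monitoring frequency concurrent with volatility", the mapping below is purely illustrative (the volatility labels, control names, and interval values are invented):

```python
# Hours between assessments for each volatility level (invented values).
INTERVALS = {"high": 1, "medium": 24, "low": 168}

def monitoring_interval(volatility):
    """Return how often, in hours, a control should be assessed."""
    return INTERVALS[volatility]

controls = {
    "firewall_ruleset": "high",   # changes often -> assess hourly
    "password_policy": "low",     # rarely changes -> assess weekly
}
schedule = {name: monitoring_interval(v) for name, v in controls.items()}
```

The point of the sketch is that volatile controls get frequent checks while stable ones do not, so monitoring effort follows risk rather than being spread uniformly.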
What is the MOST important step during forensic analysis when trying to learn the purpose of an unknown application?
Disable all unnecessary services
Ensure chain of custody
Prepare another backup of the system
Isolate the system from the network
Isolating the system from the network is the most important step during forensic analysis when trying to learn the purpose of an unknown application. An unknown application is one that is not recognized or authorized by the system or network administrator and that may have been installed or executed without the user's knowledge or consent. An unknown application may have various purposes, such as spying on the user, stealing or corrupting data, or providing a backdoor for an attacker.
Forensic analysis is a process of examining and investigating the system or network for evidence or traces of the unknown application, such as its origin, nature, behavior, and impact. It helps establish what the application does, how it arrived, and what damage it may have caused.
Isolating the system from the network is the most important step because it protects the system from external or internal influences or interference and ensures the analysis is conducted in a safe, controlled environment. It also prevents the unknown application from communicating with external parties, spreading to other systems, or receiving remote commands while it is being studied.
The other options are steps that should be done after or alongside isolating the system from the network, not before it. Disabling all unnecessary services should be done after isolation; it simplifies the system for analysis and keeps irrelevant or redundant services from consuming resources. Ensuring chain of custody should be done alongside isolation; it maintains and documents the integrity and authenticity of the evidence throughout the forensic process so the evidence can be traced and verified. Preparing another backup of the system should be done after isolation; it preserves and replicates the system data and configuration for analysis and allows the system to be restored in case of damage or loss.
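The chain-of-custody step can be made concrete with hashing. A minimal sketch follows (the file, field names, and roles are invented; real cases also use formal evidence forms and write blockers):

```python
import datetime, hashlib, os, tempfile

def sha256_of(path):
    """Hash a file in chunks so large evidence images fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

custody_log = []

def record_transfer(path, from_person, to_person):
    """Log a hand-off; the hash lets any later holder verify integrity."""
    custody_log.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "item": path,
        "from": from_person,
        "to": to_person,
        "sha256": sha256_of(path),
    })

# Demonstration with a throwaway file standing in for a disk image.
fd, img = tempfile.mkstemp()
os.write(fd, b"evidence bytes")
os.close(fd)
record_transfer(img, "first responder", "forensic analyst")
assert custody_log[0]["sha256"] == sha256_of(img)  # unchanged since hand-off
os.remove(img)
```

Because every transfer records a cryptographic hash, any later party can recompute it and prove the evidence was not altered in transit.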
Which of the following is a PRIMARY advantage of using a third-party identity service?
Consolidation of multiple providers
Directory synchronization
Web based logon
Automated account management
Consolidation of multiple providers is the primary advantage of using a third-party identity service. A third-party identity service provides identity and access management (IAM) functions, such as authentication, authorization, and federation, for multiple applications or systems using a single identity provider (IdP). It can offer benefits such as single sign-on (SSO), centralized policy enforcement, and reduced administrative overhead.
Consolidation of multiple providers is the primary advantage because it simplifies and streamlines the IAM architecture and processes by reducing the number of IdPs and IAM systems involved in managing identities and access across applications. Consolidation also helps avoid the issues that arise from having multiple IdPs and IAM systems, such as inconsistent, redundant, or conflicting IAM policies and controls, and inefficient, vulnerable, or disrupted IAM functions.
The other options are secondary or scenario-specific advantages. Directory synchronization matters mainly when the organization already has a directory service, such as LDAP or Active Directory, that stores and manages user accounts and attributes, and wants to synchronize them with the third-party identity service to enable SSO or federation. Web-based logon matters mainly when the service uses a web-based protocol, such as SAML or OAuth, to facilitate SSO or federation by redirecting users to a logon page where they enter their credentials or give consent. Automated account management matters mainly when the service provisions, deprovisions, or updates user accounts and access rights through an automated or self-service mechanism such as SCIM or just-in-time (JIT) provisioning.
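To make the SCIM point concrete, here is a sketch of the JSON body an IdP's automated provisioning might send to an application's /Users endpoint. The user attributes and endpoint are illustrative; only the schema URN is part of the SCIM 2.0 standard:

```python
import json

# Minimal SCIM 2.0 "create user" payload (illustrative attribute values).
new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jdoe",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "emails": [{"value": "jdoe@example.com", "primary": True}],
    "active": True,
}
body = json.dumps(new_user)
# An IdP would POST `body` to the application's SCIM /Users endpoint with
# a bearer token; deprovisioning typically sets "active" to False.
```

This is the mechanism behind "automated account management": the IdP pushes account lifecycle changes to every connected application instead of administrators editing each one by hand.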
What would be the MOST cost effective solution for a Disaster Recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours?
Warm site
Hot site
Mirror site
Cold site
A warm site is the most cost effective solution for a disaster recovery (DR) site given that the organization's systems cannot be unavailable for more than 24 hours. A DR site is a backup facility that can be used to restore the normal operation of the organization's IT systems and infrastructure after a disruption or disaster. DR sites differ in readiness and functionality, depending on the organization's recovery objectives and budget. The main types of DR sites are:
A cold site, which provides basic space and utilities but no installed equipment or data, and therefore takes days or weeks to activate
A warm site, which provides partially configured equipment and connectivity, and can be brought online within hours to about a day
A hot site, which provides fully configured, operational equipment with near-current data, and can take over within minutes to hours
A mirror site, which fully duplicates the primary site and stays synchronized with it at all times, allowing near-instant failover
A warm site is the most cost effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it can provide a balance between the recovery time and the recovery cost. A warm site can enable the organization to resume its critical functions and operations within a reasonable time frame, without spending too much on the DR site maintenance and operation. A warm site can also provide some flexibility and scalability for the organization to adjust its recovery strategies and resources according to its needs and priorities.
The other options are either too costly or too slow for the organization's recovery objectives and budget. A hot site is too costly, because it requires heavy investment in DR equipment, software, and services, plus ongoing operational and maintenance costs; it suits systems that cannot be unavailable for more than a few hours or minutes, or that have very high availability and performance requirements. A mirror site is also too costly, because it duplicates the entire primary site with the same hardware, software, data, and applications, kept online and synchronized at all times; it suits systems that cannot afford any downtime or data loss, or that have very strict compliance and regulatory requirements. A cold site is too slow, because it requires significant time and effort for installation, configuration, and restoration, and relies on other sources for backup data and applications; it suits systems that can be unavailable for days or weeks, or that have very low criticality and priority.
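The selection logic above can be sketched as a small function. The hour and cost figures are illustrative, not standards:

```python
# Rough readiness/cost trade-off (all numbers invented for the example).
SITES = [  # (name, typical worst-case recovery time in hours, relative cost)
    ("mirror", 0, 4),
    ("hot", 4, 3),
    ("warm", 24, 2),
    ("cold", 168, 1),
]

def cheapest_site(rto_hours):
    """Pick the lowest-cost site type that can still meet the RTO."""
    candidates = [(cost, name) for name, hours, cost in SITES
                  if hours <= rto_hours]
    return min(candidates)[1]

print(cheapest_site(24))  # -> warm, matching the answer above
```

With a 24-hour recovery time objective, hot and mirror sites also qualify but cost more, while a cold site cannot meet the deadline; the warm site is the cheapest option that still fits.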
Which of the following is the FIRST step in the incident response process?
Determine the cause of the incident
Disconnect the system involved from the network
Isolate and contain the system involved
Investigate all symptoms to confirm the incident
Investigating all symptoms to confirm the incident is the first step in the incident response process. An incident is an event that violates or threatens the security, availability, integrity, or confidentiality of IT systems or data. Incident response is a process of detecting, analyzing, containing, eradicating, recovering from, and learning from an incident, and it helps limit damage, restore operations, and prevent recurrence.
Investigating all symptoms comes first because it verifies and validates the incident so that the response can be properly initiated and escalated. A symptom is a sign or indication that an incident may have occurred or be occurring, such as an alert, a log entry, or a report. Investigation involves collecting and analyzing relevant data from sources such as the IT systems, the network, the users, or external parties, and determining whether an incident has actually happened and how serious or urgent it is. Confirming the incident first also helps avoid acting on false positives and establishes the appropriate severity, priority, and escalation path.
The other options are steps that should be done after or alongside investigating the symptoms, not before. Determining the cause of the incident should be done after confirmation; it identifies and analyzes the root cause and source of the incident so the response can be directed and focused. This involves examining and testing the affected systems and data and tracing the origin and path of the incident using techniques such as forensics, malware analysis, or reverse engineering, which in turn guides the choice of eradication and recovery actions.
Disconnecting the system involved from the network should be done alongside the investigation; it isolates and protects the system from external or internal interference so the response is conducted in a safe and controlled environment, and it keeps the incident from spreading to other systems or exfiltrating data.
Isolating and containing the system involved should be done after confirmation; it confines and restricts the incident so the response can continue. Containment applies security measures and controls, such as firewall rules, access policies, or encryption keys, to limit or stop the activity and impact of the incident on the IT systems and data, limiting damage while evidence is preserved for analysis.
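The "confirm before reacting" idea can be sketched as a toy rule: declare an incident only when independent symptom sources corroborate each other (the rule and threshold are invented for illustration; real triage uses richer criteria):

```python
# Toy confirmation rule for incident triage (threshold invented).
def confirm_incident(symptoms):
    """symptoms: list of (source, indicator) pairs from alerts, logs, users."""
    sources = {source for source, _ in symptoms}
    return len(sources) >= 2  # corroboration from at least two sources

reports = [("ids_alert", "port scan"), ("user_report", "host unresponsive")]
print(confirm_incident(reports))  # -> True
```

A single uncorroborated alert would return False, which is exactly the false-positive filtering that investigating all symptoms provides before escalation.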
A Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide which of the following?
Guaranteed recovery of all business functions
Minimization of the need for decision making during a crisis
Insurance against litigation following a disaster
Protection from loss of organization resources
Minimization of the need for decision making during a crisis is the main benefit that a Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide. A BCP/DRP is a set of policies, procedures, and resources that enable an organization to continue or resume its critical functions and operations in the event of a disruption or disaster, and it helps reduce the impact, duration, and cost of such events.
Minimization of the need for decision making during a crisis is the main benefit that a BCP/DRP will provide, because it can ensure that the organization and its staff have a clear and consistent guidance and direction on how to respond and act during a disruption or disaster, and avoid any confusion, uncertainty, or inconsistency that might worsen the situation or impact. A BCP/DRP can also help to reduce the stress and pressure on the organization and its staff during a crisis, and increase their confidence and competence in executing the plans.
The other options are unrealistic or incorrect expectations of a BCP/DRP. Guaranteed recovery of all business functions is not a benefit, because it is not feasible to recover every business function after a disruption, especially a severe or prolonged one; a BCP/DRP prioritizes and recovers the most critical functions and may have to suspend the less critical ones. Insurance against litigation following a disaster is not a benefit, because a BCP/DRP does not guarantee that the organization will avoid legal or regulatory consequences, especially if the disruption was caused by the organization's negligence or misconduct; it can only help mitigate legal and regulatory risks, and the organization may still have to comply with or report to the relevant authorities. Protection from loss of organization resources is not a benefit, because a BCP/DRP cannot prevent damage to or destruction of the organization's assets during a disruption, especially a physical or natural one; it can only help restore or replace lost or damaged resources, usually at some cost.
A continuous information security-monitoring program can BEST reduce risk through which of the following?
Collecting security events and correlating them to identify anomalies
Facilitating system-wide visibility into the activities of critical user accounts
Encompassing people, process, and technology
Logging both scheduled and unscheduled system changes
A continuous information security monitoring program can best reduce risk by encompassing people, process, and technology. Such a program maintains ongoing awareness of the security status, events, and activities of a system or network by collecting, analyzing, and reporting security data and information, and it provides benefits such as timely threat detection, ongoing visibility into the security posture, and support for risk-based decisions.
Encompassing people, process, and technology best reduces risk because it makes the monitoring program holistic and comprehensive, covering every aspect and element of system or network security. People, process, and technology are the three pillars of such a program: people are the staff, roles, and responsibilities that operate the program and act on its findings; process is the policies, procedures, and workflows that govern how monitoring is performed and how results are handled; and technology is the tools and systems that collect, analyze, and report the security data.
The other options are specific or partial ways to reduce risk, not the best way. Collecting security events and correlating them to identify anomalies focuses on one aspect of the security data and does not address the security objectives and requirements, the security controls and measures, or feedback and improvement. Facilitating system-wide visibility into the activities of critical user accounts covers one element of security but not others, such as threats and vulnerabilities, incidents and impacts, or response and remediation. Logging both scheduled and unscheduled system changes focuses on one type of security event and not on others, such as alerts and notifications, analysis and correlation, or reporting and documentation.
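Event correlation to identify anomalies, mentioned above, can be illustrated with a toy rule: flag any source with a burst of failed logins (the event format and threshold are invented; real SIEM correlation is far richer):

```python
from collections import Counter

# Toy correlation rule: flag any source with a burst of failed logins.
def anomalies(events, threshold=5):
    failures = Counter(e["src"] for e in events if e["type"] == "login_failure")
    return {src for src, count in failures.items() if count >= threshold}

events = [{"src": "10.0.0.9", "type": "login_failure"} for _ in range(6)]
events.append({"src": "10.0.0.7", "type": "login_failure"})
print(anomalies(events))  # -> {'10.0.0.9'}
```

Note that this is only the "technology" pillar; without people to respond and a process for escalation, the flagged anomaly reduces no risk, which is why encompassing all three pillars is the better answer.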
When is a Business Continuity Plan (BCP) considered to be valid?
When it has been validated by the Business Continuity (BC) manager
When it has been validated by the board of directors
When it has been validated by all threat scenarios
When it has been validated by realistic exercises
A Business Continuity Plan (BCP) is considered to be valid when it has been validated by realistic exercises. A BCP is the part of a BCP/DRP that focuses on ensuring the continuous operation of the organization's critical business functions and processes during and after a disruption or disaster. A BCP should include components such as a business impact analysis, recovery strategies, the BCP document itself, testing, training, and exercises, and ongoing maintenance and review.
Realistic exercises validate a BCP because they show that the plan is practical and applicable and that it can achieve the desired outcomes in a real-life scenario. Realistic exercises involve performing and practicing the BCP with the relevant stakeholders using simulated or hypothetical scenarios, such as a fire drill, a power outage, or a cyberattack, which reveals gaps, builds familiarity, and confirms that the documented procedures actually work.
The other options are steps or parties involved in developing or approving a BCP, not criteria for its validity. Validation by the Business Continuity (BC) manager is a development step. The BC manager oversees and coordinates the BCP activities, such as the business impact analysis, the recovery strategies, the BCP document, the testing, training, and exercises, and the maintenance and review, and can validate the BCP by reviewing its components against the BCP standards and objectives. That review alone, however, does not test or demonstrate the plan in a realistic scenario. Validation by the board of directors is an approval step. The board, elected by the shareholders to oversee the strategic direction and governance of the organization, can approve the BCP by endorsing it and allocating the necessary resources and funds, but approval likewise does not test the plan in a realistic scenario. Validation by all threat scenarios is an unrealistic expectation. A threat scenario is a description or simulation of a possible disruption, such as a natural hazard, a human error, or a technical failure, and it can be used to measure the BCP's performance in responding and recovering. It is not feasible to validate the BCP against every threat scenario, because there are too many, some are unknown, and some are too severe or complex to simulate; the BCP should instead be validated against the most likely or relevant scenarios.
Which of the following types of business continuity tests includes assessment of resilience to internal and external risks without endangering live operations?
Walkthrough
Simulation
Parallel
White box
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations. Business continuity is the ability of an organization to maintain or resume its critical functions and operations in the event of a disruption or disaster. Business continuity testing evaluates and validates the effectiveness and readiness of the business continuity plan (BCP) and the disaster recovery plan (DRP) through various methods and scenarios, and it helps identify gaps, train staff, and build confidence in the plans.
There are different types of business continuity tests, depending on the scope, purpose, and complexity of the test. Common types include:
A walkthrough, in which the plan is read through and discussed with the stakeholders, with no actual testing
A simulation, in which a disruption scenario is enacted and the response is practiced without affecting live operations
A parallel test, in which the alternate site or system is activated and operated while live operations continue at the primary site
A full interruption test, in which live operations are actually shut down and transferred to the alternate site or system
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations, because it can simulate various types of risks, such as natural, human, or technical, and assess how the organization and its systems can cope and recover from them, without actually causing any harm or disruption to the live operations. Simulation can also help to identify and mitigate any potential risks that might affect the live operations, and to improve the resilience and preparedness of the organization and its systems.
The other options are not types of business continuity tests that assess resilience without endangering live operations. A walkthrough does not assess resilience to internal and external risks; it is a review and discussion of the BCP and DRP without any actual testing or practice. A parallel test does not endanger live operations, but it also does not enact a disruption scenario; it maintains live operations while activating and operating the alternate site or system. White box is not a type of business continuity test at all; it is a software testing approach in which the tester has knowledge of the internal structure of the code. A full interruption test, by contrast, does endanger live operations by shutting them down and transferring them to the alternate site or system.
Recovery strategies of a Disaster Recovery Plan (DRP) MUST be aligned with which of the following?
Hardware and software compatibility issues
Applications’ criticality and downtime tolerance
Budget constraints and requirements
Cost/benefit analysis and business objectives
Recovery strategies of a Disaster Recovery Plan (DRP) must be aligned with the cost/benefit analysis and business objectives. A DRP is the part of a BCP/DRP that focuses on restoring the normal operation of the organization's IT systems and infrastructure after a disruption or disaster. A DRP should include components such as recovery objectives (for example, recovery time and recovery point objectives), recovery strategies, roles and responsibilities, and recovery procedures.
Alignment with the cost/benefit analysis and business objectives ensures that the DRP is feasible and suitable and that it achieves the desired outcomes in a cost-effective and efficient manner. A cost/benefit analysis compares the costs and benefits of different recovery strategies and determines the one that provides the best value for money. A business objective is a goal or target that the organization wants to achieve through its IT systems and infrastructure, such as increasing productivity, profitability, or customer satisfaction. A recovery strategy aligned with both helps justify the DR spending, focus recovery on what the business actually needs, and avoid strategies that are either inadequate or unnecessarily expensive.
The other options are factors to consider or address when developing or implementing the recovery strategies, not factors the strategies must be aligned with. Hardware and software compatibility issues should be considered during development, because they affect the functionality and interoperability of the IT systems and may require additional resources or adjustments. Applications' criticality and downtime tolerance should be addressed during implementation, because they determine the priority and urgency of recovery for different applications and may require different recovery objectives and resources. Budget constraints and requirements should be considered during development, because they limit the availability and affordability of IT resources and funds and may require trade-offs.
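A cost/benefit comparison of recovery strategies can be sketched with annual loss expectancy (ALE). Every figure below is invented for illustration:

```python
# Simplified cost/benefit comparison using annual loss expectancy.
def net_benefit(ale_before, ale_after, annual_cost):
    """Risk reduction a strategy buys, minus its yearly cost."""
    return (ale_before - ale_after) - annual_cost

ale_no_dr = 500_000  # expected annual loss with no DR capability (invented)
strategies = {
    "hot site":  {"ale_after": 20_000,  "annual_cost": 300_000},
    "warm site": {"ale_after": 80_000,  "annual_cost": 120_000},
    "cold site": {"ale_after": 250_000, "annual_cost": 40_000},
}
best = max(strategies, key=lambda s: net_benefit(
    ale_no_dr, strategies[s]["ale_after"], strategies[s]["annual_cost"]))
print(best)  # -> warm site, for these invented numbers
```

The point is that the chosen strategy is the one whose risk reduction best justifies its cost relative to the business objectives, not simply the cheapest or the most resilient option.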
What is the PRIMARY reason for implementing change management?
Certify and approve releases to the environment
Provide version rollbacks for system changes
Ensure that all applications are approved
Ensure accountability for changes to the environment
Ensuring accountability for changes to the environment is the primary reason for implementing change management. Change management is a process that ensures that any changes to the system or network environment, such as the hardware, software, configuration, or documentation, are planned, approved, implemented, and documented in a controlled and consistent manner. It provides benefits such as reduced risk of outages, predictable and repeatable changes, and a reliable record of what changed, when, and why.
Ensuring accountability for changes to the environment is the primary reason for implementing change management, because it can ensure that the changes are authorized, justified, and traceable, and that the parties involved in the changes are responsible and accountable for their actions and results. Accountability can also help to deter or detect any unauthorized or malicious changes that might compromise the system or network environment.
The other options are not the primary reasons for implementing change management, but rather secondary or specific reasons for different aspects or phases of change management. Certifying and approving releases to the environment is a reason for implementing change management, but it is more relevant for the approval phase of change management, which is the phase that involves reviewing and validating the changes and their impacts, and granting or denying the permission to proceed with the changes. Providing version rollbacks for system changes is a reason for implementing change management, but it is more relevant for the implementation phase of change management, which is the phase that involves executing and monitoring the changes and their effects, and providing the backup and recovery options for the changes. Ensuring that all applications are approved is a reason for implementing change management, but it is more relevant for the application changes, which are the changes that affect the software components or services that provide the functionality or logic of the system or network environment.
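The accountability that change management provides can be sketched as a minimal promotion gate that refuses any change lacking a recorded approval. The field names and workflow below are illustrative, not taken from any particular change-management tool:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeRequest:
    change_id: str
    description: str
    requested_by: str
    approved_by: Optional[str] = None  # recorded by the change review board

def can_promote(change: ChangeRequest) -> bool:
    """Allow promotion to production only for changes with a recorded approver,
    so every production change is traceable to an accountable party."""
    return change.approved_by is not None

cr = ChangeRequest("CHG-001", "Update TLS configuration", requested_by="alice")
print(can_promote(cr))   # False: no approval recorded, promotion refused
cr.approved_by = "change-review-board"
print(can_promote(cr))   # True: approval recorded, promotion may proceed
```

The point of the record is traceability: who requested and who approved each change is preserved, which is what deters and detects unauthorized changes.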
An organization is found lacking the ability to properly establish performance indicators for its Web hosting solution during an audit. What would be the MOST probable cause?
Absence of a Business Intelligence (BI) solution
Inadequate cost modeling
Improper deployment of the Service-Oriented Architecture (SOA)
Insufficient Service Level Agreement (SLA)
An insufficient Service Level Agreement (SLA) is the most probable cause of an organization lacking the ability to properly establish performance indicators for its Web hosting solution during an audit. A Web hosting solution is a service that provides the infrastructure, resources, and tools for hosting and maintaining a website or web application on the internet.
A Service Level Agreement (SLA) is a contract that defines the expectations, responsibilities, and obligations of the parties involved in a service, such as the service provider and the service consumer. An SLA typically includes components such as service level indicators, service level objectives, service level reporting, and service level penalties.
Insufficient SLA would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it could mean that the SLA does not include or specify the appropriate service level indicators or objectives for the Web hosting solution, or that the SLA does not provide or enforce the adequate service level reporting or penalties for the Web hosting solution. This could affect the ability of the organization to measure and assess the Web hosting solution quality, performance, and availability, and to identify and address any issues or risks in the Web hosting solution.
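As an example of the kind of performance indicator an adequate SLA should define, availability can be computed from measured downtime and compared against the agreed objective. The 99.9% target and the downtime figure below are assumed values:

```python
def availability_pct(total_minutes: float, downtime_minutes: float) -> float:
    """Availability indicator: uptime as a percentage of total time."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

TOTAL = 30 * 24 * 60   # minutes in a 30-day reporting period (43,200)
DOWNTIME = 50          # assumed measured downtime, in minutes
SLA_TARGET = 99.9      # assumed availability objective from the SLA

measured = availability_pct(TOTAL, DOWNTIME)
print(f"{measured:.3f}%")                              # 99.884%
print("met" if measured >= SLA_TARGET else "breached") # breached
```

Without an SLA that names the indicator, the target, and the reporting method, there is nothing concrete for an auditor to measure the hosting service against.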
The other options are not the most probable causes, but rather factors that affect the Web hosting solution in other ways. Absence of a Business Intelligence (BI) solution could affect the organization's ability to analyze and use the data produced by the Web hosting solution, such as web traffic, behavior, or conversion. A BI solution collects, integrates, processes, and presents data from various sources to support decision making and planning, so its absence affects the analysis and usage of the performance indicators, not their definition or specification. Inadequate cost modeling could affect the organization's ability to estimate and optimize the cost and value of the Web hosting solution, such as hosting fees, maintenance costs, or return on investment. A cost model helps calculate and compare these figures and identify the most efficient option, so inadequate cost modeling affects cost estimation and optimization, not the definition of performance indicators. Improper deployment of the Service-Oriented Architecture (SOA) could affect the organization's ability to design and develop the Web hosting solution, such as its web services, components, or interfaces. SOA is a software architecture that modularizes, standardizes, and integrates the software components or services that provide the functionality or logic of the solution.
However, improper deployment of the SOA is not the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather the design or development of the Web hosting solution.
What should be the FIRST action to protect the chain of evidence when a desktop computer is involved?
Take the computer to a forensic lab
Make a copy of the hard drive
Start documenting
Turn off the computer
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved. A chain of evidence, also known as a chain of custody, is a process that documents and preserves the integrity and authenticity of the evidence collected from a crime scene, such as a desktop computer. A chain of evidence should record who collected the evidence, when and where it was collected, how it was handled and stored, and who had access to it.
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved, because it can ensure that the original hard drive is not altered, damaged, or destroyed during the forensic analysis, and that the copy can be used as a reliable and admissible source of evidence. Making a copy of the hard drive should also involve using a write blocker, which is a device or a software that prevents any modification or deletion of the data on the hard drive, and generating a hash value, which is a unique and fixed identifier that can verify the integrity and consistency of the data on the hard drive.
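The integrity verification described above can be sketched as follows: compute a cryptographic hash of both the original and the forensic copy and confirm the digests match. The snippet uses a small temporary file as a stand-in for a drive image; real imaging would go through a hardware write blocker with a forensic imaging tool:

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks so that
    arbitrarily large images can be hashed without loading them into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "original.img")
forensic_copy = os.path.join(workdir, "copy.img")
with open(original, "wb") as f:
    f.write(b"stand-in for raw drive contents")

shutil.copyfile(original, forensic_copy)  # stands in for bit-for-bit imaging

# Matching digests demonstrate the copy is an exact duplicate of the original.
assert sha256_of(original) == sha256_of(forensic_copy)
```

Recording the digest alongside the custody log lets anyone later re-hash the evidence and confirm it has not been altered.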
The other options are not the first actions to protect the chain of evidence when a desktop computer is involved, but rather actions that should be done after or along with making a copy of the hard drive. Taking the computer to a forensic lab is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is transported and stored in a secure and controlled environment, and that the forensic analysis is conducted by qualified and authorized personnel. Starting documenting is an action that should be done along with making a copy of the hard drive, because it can ensure that the chain of evidence is maintained and recorded throughout the forensic process, and that the evidence can be traced and verified. Turning off the computer is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is powered down and disconnected from any network or device, and that the computer is protected from any further damage or tampering.
A Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. The program is not working as expected. What is the MOST probable security feature of Java preventing the program from operating as intended?
Least privilege
Privilege escalation
Defense in depth
Privilege bracketing
The most probable security feature of Java preventing the program from operating as intended is least privilege. Least privilege is a principle that states that a subject (such as a user, a process, or a program) should only have the minimum amount of access or permissions that are necessary to perform its function or task. Least privilege can help to reduce the attack surface and the potential damage of a system or network, by limiting the exposure and impact of a subject in case of a compromise or misuse.
Java implements the principle of least privilege through its security model, which consists of several components, such as the class loader, the bytecode verifier, the security manager, and the security policy.
In this question, the Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. This means that the Java program needs to have the permissions to perform the file I/O and the network communication operations, which are considered as sensitive or risky actions by the Java security model. However, if the Java program is running on computer C with the default or the minimal security permissions, such as in the Java Security Sandbox, then it will not be able to perform these operations, and the program will not work as expected. Therefore, the most probable security feature of Java preventing the program from operating as intended is least privilege, which limits the access or permissions of the Java program based on its source, signer, or policy.
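Java's own enforcement happens inside the runtime, via permission checks against the security policy, but the least-privilege idea can be sketched in a language-neutral way: an operation is refused unless a matching permission has been explicitly granted. The permission strings below are invented for illustration, loosely echoing Java's FilePermission and SocketPermission:

```python
# A sandboxed program starts with only minimal, explicitly granted permissions.
GRANTED = {"file:read:/data/input.txt"}

def check_permission(action: str) -> bool:
    return action in GRANTED

def read_file(path: str) -> str:
    if not check_permission(f"file:read:{path}"):
        raise PermissionError(f"read denied: {path}")
    return "file contents"  # placeholder for the actual read

def connect(host: str) -> None:
    if not check_permission(f"socket:connect:{host}"):
        raise PermissionError(f"connect denied: {host}")

read_file("/data/input.txt")      # granted: succeeds
try:
    connect("computerB")          # never granted: refused, like the sandboxed program
except PermissionError as err:
    print(err)                    # connect denied: computerB
```

This mirrors the scenario in the question: the program's file I/O and network operations fail not because the code is wrong, but because the runtime never granted the permissions they require.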
The other options are not the security features of Java preventing the program from operating as intended, but rather concepts or techniques that are related to security in general or in other contexts. Privilege escalation is a technique that allows a subject to gain higher or unauthorized access or permissions than what it is supposed to have, by exploiting a vulnerability or a flaw in a system or network. Privilege escalation can help an attacker to perform malicious actions or to access sensitive resources or data, by bypassing the security controls or restrictions. Defense in depth is a concept that states that a system or network should have multiple layers or levels of security, to provide redundancy and resilience in case of a breach or an attack. Defense in depth can help to protect a system or network from various threats and risks, by using different types of security measures and controls, such as the physical, the technical, or the administrative ones. Privilege bracketing is a technique that allows a subject to temporarily elevate or lower its access or permissions, to perform a specific function or task, and then return to its original or normal level. Privilege bracketing can help to reduce the exposure and impact of a subject, by minimizing the time and scope of its higher or lower access or permissions.
Which of the following is a web application control that should be put into place to prevent exploitation of Operating System (OS) bugs?
Check arguments in function calls
Test for the security patch level of the environment
Include logging functions
Digitally sign each application module
Testing for the security patch level of the environment is the web application control that should be put into place to prevent exploitation of Operating System (OS) bugs. OS bugs are errors or defects in the code or logic of the OS that can cause it to malfunction or behave unexpectedly, and attackers can exploit them to gain unauthorized access, disrupt business operations, or steal or leak sensitive data. Testing the security patch level of the environment helps ensure that the vendor's fixes for known OS flaws have been applied before attackers can exploit them.
The other options are not the web application controls that should be put into place to prevent exploitation of OS bugs, but rather web application controls that can prevent or mitigate other types of web application attacks or issues. Checking arguments in function calls is a web application control that can prevent or mitigate buffer overflow attacks, which are attacks that exploit the vulnerability of the web application code that does not properly check the size or length of the input data that is passed to a function or a variable, and overwrite the adjacent memory locations with malicious code or data. Including logging functions is a web application control that can prevent or mitigate unauthorized access or modification attacks, which are attacks that exploit the lack of or weak authentication or authorization mechanisms of the web applications, and access or modify the web application data or functionality without proper permission or verification. Digitally signing each application module is a web application control that can prevent or mitigate code injection or tampering attacks, which are attacks that exploit the vulnerability of the web application code that does not properly validate or sanitize the input data that is executed or interpreted by the web application, and inject or modify the web application code with malicious code or data.
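The "check arguments in function calls" control mentioned above amounts to validating the size and type of inputs before using them. A minimal sketch with an arbitrary length limit (in a language like C, omitting such a check before a copy is precisely what enables a buffer overflow):

```python
MAX_NAME_LEN = 64  # arbitrary illustrative limit

def set_username(buffer: bytearray, name: bytes) -> None:
    """Copy `name` into `buffer` only after checking its type and length.
    Skipping this length check is the kind of missing argument check
    that produces a buffer overflow in memory-unsafe languages."""
    if not isinstance(name, (bytes, bytearray)):
        raise TypeError("name must be bytes")
    if len(name) > min(MAX_NAME_LEN, len(buffer)):
        raise ValueError("name exceeds buffer capacity")
    buffer[: len(name)] = name

buf = bytearray(64)
set_username(buf, b"alice")            # within bounds: accepted
try:
    set_username(buf, b"A" * 1000)     # oversized: rejected before any copy
except ValueError as err:
    print(err)                         # name exceeds buffer capacity
```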
The configuration management and control task of the certification and accreditation process is incorporated in which phase of the System Development Life Cycle (SDLC)?
System acquisition and development
System operations and maintenance
System initiation
System implementation
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the System Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using models and methodologies such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as system initiation, system acquisition and development, system implementation, and system operations and maintenance.
The certification and accreditation process assesses and verifies the security and compliance of a system and authorizes its operation and maintenance, using standards and frameworks such as NIST SP 800-37 or ISO/IEC 27001. It can be divided into several tasks, each with its own objectives and activities, such as security categorization, security planning, security assessment, security authorization, and continuous security monitoring.
The configuration management and control task is incorporated in the system acquisition and development phase of the SDLC because it ensures that the system design and development are consistent and compliant with the security objectives and requirements, and that system changes are controlled and documented. Configuration management and control establishes and maintains the baseline and inventory of system components and resources, such as hardware, software, data, or documentation, and tracks and records any modifications to them, using techniques and tools such as version control, change control, and configuration audits.
The other options are not the phases of the SDLC that incorporate the configuration management and control task of the certification and accreditation process, but rather phases that involve other tasks of the certification and accreditation process. System operations and maintenance is a phase of the SDLC that incorporates the security monitoring task of the certification and accreditation process, because it can ensure that the system operation and maintenance are consistent and compliant with the security objectives and requirements, and that the system security is updated and improved. System initiation is a phase of the SDLC that incorporates the security categorization and security planning tasks of the certification and accreditation process, because it can ensure that the system scope and objectives are defined and aligned with the security objectives and requirements, and that the security plan and policy are developed and documented. System implementation is a phase of the SDLC that incorporates the security assessment and security authorization tasks of the certification and accreditation process, because it can ensure that the system deployment and installation are evaluated and verified for the security effectiveness and compliance, and that the system operation and maintenance are authorized and approved based on the risk and impact analysis and the security objectives and requirements.
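The baseline-and-tracking idea behind configuration management and control can be sketched as a simple drift check that compares the current component inventory against the approved baseline; all component names and versions below are invented:

```python
# Approved baseline of component versions (all names and versions invented).
baseline = {"openssl": "3.0.13", "nginx": "1.24.0", "app": "2.1.0"}

def detect_drift(current: dict) -> dict:
    """Return each component whose recorded and observed states differ,
    mapped to a (baseline_version, current_version) pair."""
    drift = {}
    for name in baseline.keys() | current.keys():
        if baseline.get(name) != current.get(name):
            drift[name] = (baseline.get(name), current.get(name))
    return drift

# An unapproved nginx upgrade and an unexpected extra package show up as drift.
observed = {"openssl": "3.0.13", "nginx": "1.25.1", "app": "2.1.0", "netcat": "1.10"}
print(detect_drift(observed))  # flags nginx (changed) and netcat (not in baseline)
```

A configuration audit in a real program works the same way at larger scale: any difference between the approved baseline and the observed state is an unauthorized or undocumented change to investigate.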
Which of the following is the BEST method to prevent malware from being introduced into a production environment?
Purchase software from a limited list of retailers
Verify the hash key or certificate key of all updates
Do not permit programs, patches, or updates from the Internet
Test all new software in a segregated environment
Testing all new software in a segregated environment is the best method to prevent malware from being introduced into a production environment. Malware is any malicious software that can harm or compromise the security, availability, integrity, or confidentiality of a system or data, and it can be introduced into a production environment through various sources, such as software downloads, updates, patches, or installations. Testing all new software in a segregated environment means verifying and validating the functionality and security of the software before deploying it to production, using a separate system or network that is isolated and protected from the production environment, so that any malware or defect is detected and contained before it can reach production.
The other options are not the best methods to prevent malware from being introduced into a production environment, but rather methods that can reduce or mitigate the risk of malware, but not eliminate it. Purchasing software from a limited list of retailers is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves obtaining software only from trusted and reputable sources, such as official vendors or distributors, that can provide some assurance of the quality and security of the software. However, this method does not guarantee that the software is free of malware, as it may still contain hidden or embedded malware, or it may be tampered with or compromised during the delivery or installation process. Verifying the hash key or certificate key of all updates is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves checking the authenticity and integrity of the software updates, patches, or installations, by comparing the hash key or certificate key of the software with the expected or published value, using cryptographic techniques and tools. However, this method does not guarantee that the software is free of malware, as it may still contain malware that is not detected or altered by the hash key or certificate key, or it may be subject to a man-in-the-middle attack or a replay attack that can intercept or modify the software or the key. Not permitting programs, patches, or updates from the Internet is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves restricting or blocking the access or download of software from the Internet, which is a common and convenient source of malware, by applying and enforcing the appropriate security policies and controls, such as firewall rules, antivirus software, or web filters. 
However, this method does not guarantee that the software is free of malware, as it may still be obtained or infected from other sources, such as removable media, email attachments, or network shares.
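To illustrate the hash-verification control discussed above: an update is accepted only if its computed SHA-256 digest matches the vendor-published value. As the text explains, this detects tampering in transit but cannot detect malware present in the legitimately published release. The digest below is the standard SHA-256 test vector for the string "abc", standing in for a published value:

```python
import hashlib

def verify_update(data: bytes, published_sha256: str) -> bool:
    """Accept an update only if its SHA-256 digest matches the published value."""
    return hashlib.sha256(data).hexdigest() == published_sha256.lower()

# SHA-256("abc") from the FIPS 180 test vectors, standing in for a
# vendor-published digest of a downloaded update.
PUBLISHED = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"

print(verify_update(b"abc", PUBLISHED))   # True: intact download accepted
print(verify_update(b"abd", PUBLISHED))   # False: altered download rejected
```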
Which of the following is the PRIMARY risk with using open source software in a commercial software construction?
Lack of software documentation
License agreements requiring release of modified code
Expiration of the license agreement
Costs associated with support of the software
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code. Open source software is software that uses publicly available source code, which can be seen, modified, and distributed by anyone. Open source software has some advantages, such as being affordable and flexible, but it also has some disadvantages, such as being potentially insecure or unsupported.
One of the main disadvantages of using open source software in a commercial software construction is the license agreements that govern the use and distribution of the open source software. License agreements are legal contracts that specify the rights and obligations of the parties involved in the software, such as the original authors, the developers, and the users. License agreements can vary in terms of their terms and conditions, such as the scope, the duration, or the fees of the software.
The common types of license agreements for open source software are permissive licenses, such as MIT, BSD, or Apache, which impose minimal restrictions on use and redistribution, and copyleft licenses, such as the GNU General Public License (GPL), which require that derivative works be distributed under the same license terms.
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code, which are usually associated with copyleft licenses. This means that if a commercial software construction uses or incorporates open source software that is licensed under a copyleft license, then it must also release its own source code and any modifications or derivatives of it, under the same or compatible copyleft license. This can pose a significant risk for the commercial software construction, as it may lose its competitive advantage, intellectual property, or revenue, by disclosing its source code and allowing others to use, modify, or distribute it.
The other options are not the primary risks with using open source software in a commercial software construction, but rather secondary or minor risks that may or may not apply to the open source software. Lack of software documentation is a secondary risk with using open source software in a commercial software construction, as it may affect the quality, usability, or maintainability of the open source software, but it does not necessarily affect the rights or obligations of the commercial software construction. Expiration of the license agreement is a minor risk with using open source software in a commercial software construction, as it may affect the availability or continuity of the open source software, but it is unlikely to happen, as most open source software licenses are perpetual or indefinite. Costs associated with support of the software is a secondary risk with using open source software in a commercial software construction, as it may affect the reliability, security, or performance of the open source software, but it can be mitigated or avoided by choosing the open source software that has adequate or alternative support options.
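A first-pass screen for the copyleft risk described above can be sketched as a dependency scan that flags licenses which may require releasing modified code. The license sets below use real SPDX identifiers but are illustrative and not exhaustive; actual license compliance requires legal review:

```python
# Simplified classification using real SPDX identifiers; illustrative only.
COPYLEFT = {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only", "LGPL-3.0-only"}

def copyleft_dependencies(dependencies: dict) -> list:
    """Return dependencies whose license may require releasing modified code."""
    return sorted(name for name, spdx in dependencies.items() if spdx in COPYLEFT)

deps = {"libfoo": "MIT", "libbar": "GPL-3.0-only", "libbaz": "Apache-2.0"}
print(copyleft_dependencies(deps))  # ['libbar']
```

Flagged dependencies are the ones that could oblige a commercial product incorporating them to release its own modified source under the same terms.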
When in the Software Development Life Cycle (SDLC) MUST software security functional requirements be defined?
After the system preliminary design has been developed and the data security categorization has been performed
After the vulnerability analysis has been performed and before the system detailed design begins
After the system preliminary design has been developed and before the data security categorization begins
After the business functional analysis and the data security categorization have been performed
Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed in the Software Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using models and methodologies such as waterfall, spiral, agile, or DevSecOps, and it can be divided into phases such as system initiation, system acquisition and development, system implementation, and system operations and maintenance.
Software security functional requirements are the specific and measurable security features and capabilities that the system must provide to meet the security objectives and requirements. Software security functional requirements are derived from the business functional analysis and the data security categorization, which are two tasks that are performed in the system initiation phase of the SDLC. The business functional analysis is the process of identifying and documenting the business functions and processes that the system must support and enable, such as the inputs, outputs, workflows, and tasks. The data security categorization is the process of determining the security level and impact of the system and its data, based on the confidentiality, integrity, and availability criteria, and applying the appropriate security controls and measures. Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed, because they can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system security is aligned and integrated with the business functions and processes.
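The data security categorization step can be illustrated with the FIPS 199 "high water mark" rule, under which a system's overall impact level is the highest of its confidentiality, integrity, and availability impact levels:

```python
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def overall_impact(confidentiality: str, integrity: str, availability: str) -> str:
    """FIPS 199 high-water mark: overall impact is the maximum of C, I, A."""
    return max((confidentiality, integrity, availability), key=LEVELS.__getitem__)

print(overall_impact("low", "moderate", "high"))   # high
print(overall_impact("low", "low", "moderate"))    # moderate
```

The resulting level then drives which security controls, and therefore which security functional requirements, the system must satisfy.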
The other options are not the phases of the SDLC when the software security functional requirements must be defined, but rather phases that involve other tasks or activities related to the system design and development. After the system preliminary design has been developed and the data security categorization has been performed is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is verified and validated. After the vulnerability analysis has been performed and before the system detailed design begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system design and components are evaluated and tested for the security effectiveness and compliance, and the system detailed design is developed, based on the system architecture and components. After the system preliminary design has been developed and before the data security categorization begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is initiated and planned.
What is the BEST approach to addressing security issues in legacy web applications?
Debug the security issues
Migrate to newer, supported applications where possible
Conduct a security assessment
Protect the legacy application with a web application firewall
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications. Legacy web applications are web applications that are outdated, unsupported, or incompatible with current technologies and standards. They may have various security issues, such as unpatched vulnerabilities, weak or outdated encryption or authentication mechanisms, and a lack of vendor support or security fixes.
Migrating to newer, supported applications where possible is the best approach because it addresses the root cause: applications that can no longer receive security fixes are replaced with applications that are actively maintained, patched, and compatible with current security standards.
The other options are not the best approaches to addressing security issues in legacy web applications, but rather approaches that can mitigate or remediate the issues without eliminating them. Debugging the security issues involves identifying and fixing errors or defects in the code or logic of the web applications, which may be difficult or impossible for legacy applications that are outdated or unsupported. Conducting a security assessment involves evaluating and testing the security effectiveness and compliance of the web applications, using techniques such as audits, reviews, scans, or penetration tests, and reporting any weaknesses or gaps; an assessment identifies the issues but does not by itself fix them. Protecting the legacy application with a web application firewall involves deploying a security device or software that monitors and filters web traffic between the applications and their users, blocking or allowing requests based on predefined rules; a firewall can shield some flaws but cannot compensate for weak or outdated encryption or authentication mechanisms inside the application itself.
An external attacker has compromised an organization’s network security perimeter and installed a sniffer onto an inside computer. Which of the following is the MOST effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information?
Implement packet filtering on the network firewalls
Install Host Based Intrusion Detection Systems (HIDS)
Require strong authentication for administrators
Implement logical network segmentation at the switches
Implementing logical network segmentation at the switches is the most effective layer of security the organization could have implemented to mitigate the attacker's ability to gain further information. Logical network segmentation is the process of dividing a network into smaller subnetworks or segments based on criteria such as function, location, or security level. It can be implemented at the switches, which are devices that operate at the data link layer of the OSI model and forward frames based on MAC addresses. Segmentation reduces the scope of broadcast traffic, isolates sensitive resources, and limits how far a compromised host can see or reach.
Logical network segmentation can mitigate the attacker’s ability to gain further information by limiting the visibility and access of the sniffer to the segment where it is installed. A sniffer is a tool that captures and analyzes the data packets that are transmitted over a network. A sniffer can be used for legitimate purposes, such as troubleshooting, testing, or monitoring the network, or for malicious purposes, such as eavesdropping, stealing, or modifying the data. A sniffer can only capture the data packets that are within its broadcast domain, which is the set of devices that can communicate with each other without a router. By implementing logical network segmentation at the switches, the organization can create multiple broadcast domains and isolate the sensitive or critical data from the compromised segment. This way, the attacker can only see the data packets that belong to the same segment as the sniffer, and not the data packets that belong to other segments. This can prevent the attacker from gaining further information or accessing other resources on the network.
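The broadcast-domain reasoning above can be illustrated with a small check of whether two hosts fall in the same subnet, assuming for illustration that one subnet corresponds to one segment; the addresses below are examples:

```python
import ipaddress

def same_segment(host_a: str, host_b: str, subnet: str) -> bool:
    """True if both hosts fall within the given subnet, i.e. (under the
    assumption of one subnet per segment) the same broadcast domain."""
    net = ipaddress.ip_network(subnet)
    return ipaddress.ip_address(host_a) in net and ipaddress.ip_address(host_b) in net

# On one flat /16, the sniffer's host and the target share a broadcast domain...
print(same_segment("10.0.1.5", "10.0.2.9", "10.0.0.0/16"))   # True
# ...but after segmenting into /24 subnets, they no longer do.
print(same_segment("10.0.1.5", "10.0.2.9", "10.0.1.0/24"))   # False
```

In the segmented case the sniffer at 10.0.1.5 never sees frames destined for hosts in 10.0.2.0/24, which is exactly the containment the answer describes.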
The other options are not the most effective layers of security the organization could have implemented to mitigate the attacker’s ability to gain further information, but rather layers that have other limitations or drawbacks. Implementing packet filtering on the network firewalls is not the most effective layer of security, because packet filtering only examines the network layer header of the data packets, such as the source and destination IP addresses, and does not inspect the payload or the content of the data. Packet filtering can also be bypassed by using techniques such as IP spoofing or fragmentation. Installing Host Based Intrusion Detection Systems (HIDS) is not the most effective layer of security, because HIDS only monitors and detects the activities and events on a single host, and does not prevent or respond to the attacks. HIDS can also be disabled or evaded by the attacker if the host is compromised. Requiring strong authentication for administrators is not the most effective layer of security, because authentication only verifies the identity of the users or processes, and does not protect the data in transit or at rest. Authentication can also be defeated by using techniques such as phishing, keylogging, or credential theft.
Which of the following is the BEST network defense against unknown types of attacks or stealth attacks in progress?
Intrusion Prevention Systems (IPS)
Intrusion Detection Systems (IDS)
Stateful firewalls
Network Behavior Analysis (NBA) tools
Network Behavior Analysis (NBA) tools are the best network defense against unknown types of attacks or stealth attacks in progress. NBA tools are devices or software that monitor and analyze network traffic and activities, and detect any anomalies or deviations from normal or expected behavior. NBA tools use various techniques, such as statistical analysis, machine learning, artificial intelligence, or heuristics, to establish a baseline of network behavior and to identify outliers or indicators of compromise. NBA tools can provide several benefits, such as detecting zero-day or signatureless attacks, exposing slow or stealthy activity that evades signature-based tools, identifying compromised hosts or insider threats by their abnormal behavior, and providing visibility into network traffic patterns and trends.
The other options are not the best network defense against unknown types of attacks or stealth attacks in progress, but rather network defenses that have other limitations or drawbacks. Intrusion Prevention Systems (IPS) are devices or software that monitor and block the network traffic and activities that match the predefined signatures or rules of known attacks. IPS can provide a proactive and preventive layer of security, but they cannot detect or stop unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IPS. Intrusion Detection Systems (IDS) are devices or software that monitor and alert the network traffic and activities that match the predefined signatures or rules of known attacks. IDS can provide a reactive and detective layer of security, but they cannot detect or alert unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IDS. Stateful firewalls are devices or software that filter and control the network traffic and activities based on the state and context of the network sessions, such as the source and destination IP addresses, port numbers, protocol types, and sequence numbers. Stateful firewalls can provide a granular and dynamic layer of security, but they cannot filter or control unknown types of attacks or stealth attacks that use valid or spoofed network sessions, or that can exploit or bypass the firewall rules.
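The baseline-and-deviation idea behind NBA can be reduced to a toy example. The traffic figures and the 3-sigma threshold below are illustrative only; real NBA tools use far richer features and models.

```python
import statistics

# Toy Network Behavior Analysis: learn a baseline of bytes/minute from
# normal traffic, then flag samples that deviate too far from it.
baseline_bpm = [980, 1010, 995, 1020, 990, 1005, 1000, 985]  # illustrative
mean = statistics.mean(baseline_bpm)
stdev = statistics.stdev(baseline_bpm)

def is_anomalous(sample: float, threshold: float = 3.0) -> bool:
    """Flag samples whose z-score against the learned baseline exceeds
    the threshold -- no attack signature is required."""
    return abs(sample - mean) / stdev > threshold

print(is_anomalous(1002))   # False: within normal variation
print(is_anomalous(15000))  # True: sudden spike, e.g. data exfiltration
```

Because the check compares behavior against a learned norm rather than a signature database, it can flag an attack the defenders have never seen before, which is exactly why NBA beats IPS/IDS for unknown or stealth attacks.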
Which of the following is used by the Point-to-Point Protocol (PPP) to determine packet formats?
Layer 2 Tunneling Protocol (L2TP)
Link Control Protocol (LCP)
Challenge Handshake Authentication Protocol (CHAP)
Packet Transfer Protocol (PTP)
Link Control Protocol (LCP) is used by the Point-to-Point Protocol (PPP) to determine packet formats. PPP is a data link layer protocol that provides a standard method for transporting network layer packets over point-to-point links, such as serial lines, modems, or dial-up connections. PPP supports various network layer protocols, such as IP, IPX, or AppleTalk, and it can encapsulate them in a common frame format. PPP also provides features such as authentication, compression, error detection, and multilink aggregation. LCP is a subprotocol of PPP that is responsible for establishing, configuring, maintaining, and terminating the point-to-point connection. LCP negotiates and agrees on various options and parameters for the PPP link, such as the maximum transmission unit (MTU), the authentication method, the compression method, the error detection method, and the packet format. LCP uses a series of messages, such as configure-request, configure-ack, configure-nak, configure-reject, terminate-request, terminate-ack, code-reject, protocol-reject, echo-request, echo-reply, and discard-request, to communicate and exchange information between the PPP peers.
The other options are not used by PPP to determine packet formats, but rather for other purposes. Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol that allows the creation of virtual private networks (VPNs) over public networks, such as the Internet. L2TP encapsulates PPP frames in IP datagrams and sends them across the tunnel between two L2TP endpoints. L2TP does not determine the packet format of PPP, but rather uses it as a payload. Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol that is used by PPP to verify the identity of the remote peer before allowing access to the network. CHAP uses a challenge-response mechanism that involves a random number (nonce) and a hash function to prevent replay attacks. CHAP does not determine the packet format of PPP, but rather uses it as a transport. Packet Transfer Protocol (PTP) is not a valid option, as there is no such protocol with this name. There is a Point-to-Point Protocol over Ethernet (PPPoE), which is a protocol that encapsulates PPP frames in Ethernet frames and allows the use of PPP over Ethernet networks. PPPoE does not determine the packet format of PPP, but rather uses it as a payload.
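The LCP header fields negotiated inside a PPP frame can be seen in a short parsing sketch. The framing here is simplified (address 0xFF, control 0x03, 2-byte protocol field, then the LCP code/identifier/length header); option payloads are omitted.

```python
import struct

# PPP protocol field value for LCP (RFC 1661) and the LCP message codes
# named in the text above.
PPP_LCP = 0xC021
LCP_CODES = {1: "configure-request", 2: "configure-ack", 3: "configure-nak",
             4: "configure-reject", 5: "terminate-request", 6: "terminate-ack",
             7: "code-reject", 8: "protocol-reject",
             9: "echo-request", 10: "echo-reply", 11: "discard-request"}

def parse_ppp(frame: bytes):
    """Parse a simplified PPP frame: address, control, 2-byte protocol,
    then (for LCP) a header of code / identifier / length."""
    addr, ctrl, proto = struct.unpack("!BBH", frame[:4])
    if proto != PPP_LCP:
        return proto, None
    code, ident, length = struct.unpack("!BBH", frame[4:8])
    return proto, (LCP_CODES.get(code, "unknown"), ident, length)

# An LCP configure-request (code 1, identifier 7, length 4) carried in PPP.
frame = bytes([0xFF, 0x03]) + struct.pack("!H", PPP_LCP) \
        + struct.pack("!BBH", 1, 7, 4)
print(parse_ppp(frame))  # (49185, ('configure-request', 7, 4))
```

The protocol field value 0xC021 (decimal 49185) is what marks the payload as LCP rather than, say, IP; the code byte then selects one of the LCP message types listed in the explanation.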
An input validation and exception handling vulnerability has been discovered on a critical web-based system. Which of the following is MOST suited to quickly implement a control?
Add a new rule to the application layer firewall
Block access to the service
Install an Intrusion Detection System (IDS)
Patch the application source code
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system. An input validation and exception handling vulnerability is a type of vulnerability that occurs when a web-based system does not properly check, filter, or sanitize the input data that is received from users or other sources, or does not properly handle the errors or exceptions that are generated by the system. An input validation and exception handling vulnerability can lead to various attacks, such as SQL injection, cross-site scripting (XSS), command injection, buffer overflows, and information disclosure through verbose error messages.
An application layer firewall is a device or software that operates at the application layer of the OSI model and inspects the application layer payload or the content of the data packets. An application layer firewall can provide various functions, such as inspecting and filtering web traffic, enforcing input validation rules, blocking requests that match known attack patterns, and logging suspicious activity.
Adding a new rule to the application layer firewall is the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, because it can prevent or reduce the impact of attacks by filtering or blocking the malicious or invalid input data that exploits the vulnerability. For example, a new rule can be added to the application layer firewall to filter or block requests that contain malicious or malformed input, such as SQL injection strings, script tags, or oversized parameters, before they reach the vulnerable system.
Adding a new rule to the application layer firewall can be done quickly and easily, without requiring any changes or patches to the web-based system, which can be time-consuming and risky, especially for a critical system. Adding a new rule to the application layer firewall can also be done remotely and centrally, without requiring any physical access or installation on the web-based system, which can be inconvenient and costly, especially for a distributed system.
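A rule of the kind described can be sketched as a small input filter. The patterns below are illustrative examples of common attack fragments, not a complete or production-grade rule set.

```python
import re

# Hypothetical application-layer firewall rule: reject request parameters
# containing common SQL injection, XSS, or path traversal patterns before
# they reach the vulnerable application.
BLOCK_PATTERNS = [
    re.compile(r"('|--|;)\s*(or|and)\b", re.IGNORECASE),  # SQLi fragments
    re.compile(r"<\s*script\b", re.IGNORECASE),           # reflected XSS
    re.compile(r"\.\./"),                                 # path traversal
]

def allow_request(param: str) -> bool:
    """Return False (block) if any rule pattern matches the input."""
    return not any(p.search(param) for p in BLOCK_PATTERNS)

print(allow_request("user=alice"))                    # True  (clean input)
print(allow_request("id=1' OR '1'='1"))               # False (SQL injection)
print(allow_request("q=<script>alert(1)</script>"))   # False (XSS)
```

Deploying such a rule on the firewall is a configuration change, which is why it is faster and less risky than patching the application itself; it compensates for the missing input validation until a proper patch ships.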
The other options are not the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, but rather options that have other limitations or drawbacks. Blocking access to the service is not the most suited option, because it can cause disruption and unavailability of the service, which can affect the business operations and customer satisfaction, especially for a critical system. Blocking access to the service can also be a temporary and incomplete solution, as it does not address the root cause of the vulnerability or prevent the attacks from occurring again. Installing an Intrusion Detection System (IDS) is not the most suited option, because IDS only monitors and detects the attacks, and does not prevent or respond to them. IDS can also generate false positives or false negatives, which can affect the accuracy and reliability of the detection. IDS can also be overwhelmed or evaded by the attacks, which can affect the effectiveness and efficiency of the detection. Patching the application source code is not the most suited option, because it can take a long time and require a lot of resources and testing to identify, fix, and deploy the patch, especially for a complex and critical system. Patching the application source code can also introduce new errors or vulnerabilities, which can affect the functionality and security of the system. Patching the application source code can also be difficult or impossible, if the system is proprietary or legacy, which can affect the feasibility and compatibility of the patch.
In a Transmission Control Protocol/Internet Protocol (TCP/IP) stack, which layer is responsible for negotiating and establishing a connection with another node?
Transport layer
Application layer
Network layer
Session layer
The transport layer of the Transmission Control Protocol/Internet Protocol (TCP/IP) stack is responsible for negotiating and establishing a connection with another node. The TCP/IP stack is a simplified version of the OSI model, and it consists of four layers: application, transport, internet, and link. The transport layer is the third layer of the TCP/IP stack, and it is responsible for providing reliable and efficient end-to-end data transfer between two nodes on a network. The transport layer uses protocols, such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP), to segment, sequence, acknowledge, and reassemble the data packets, and to handle error detection and correction, flow control, and congestion control. The transport layer also provides connection-oriented or connectionless services, depending on the protocol used.
TCP is a connection-oriented protocol, which means that it establishes a logical connection between two nodes before exchanging data, and it maintains the connection until the data transfer is complete. TCP uses a three-way handshake to negotiate and establish a connection with another node. The three-way handshake works as follows: the client sends a SYN segment containing its initial sequence number; the server responds with a SYN-ACK segment that acknowledges the client’s sequence number and contains the server’s own initial sequence number; and the client replies with an ACK segment that acknowledges the server’s sequence number, completing the connection.
UDP is a connectionless protocol, which means that it does not establish or maintain a connection between two nodes, but rather sends data packets independently and without any guarantee of delivery, order, or integrity. UDP does not use a handshake or any other mechanism to negotiate and establish a connection with another node, but rather relies on the application layer to handle any connection-related issues.
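The sequence-number bookkeeping of the TCP three-way handshake can be shown as a small simulation (illustrative only; real TCP lives in the kernel, and the segments here are just dictionaries).

```python
import random

# Simulation of TCP's three-way handshake: each side picks an initial
# sequence number (ISN), and the connection is ESTABLISHED once each side
# has acknowledged the other's ISN + 1.
def three_way_handshake():
    client_isn = random.randrange(2**32)
    server_isn = random.randrange(2**32)

    syn = {"flags": "SYN", "seq": client_isn}                   # step 1
    syn_ack = {"flags": "SYN-ACK", "seq": server_isn,
               "ack": syn["seq"] + 1}                           # step 2
    ack = {"flags": "ACK", "seq": client_isn + 1,
           "ack": syn_ack["seq"] + 1}                           # step 3

    established = (syn_ack["ack"] == client_isn + 1
                   and ack["ack"] == server_isn + 1)
    return syn, syn_ack, ack, established

print(three_way_handshake()[3])  # True: both ISNs acknowledged
```

A UDP exchange has no analogue of this: datagrams are simply sent, which is why the transport layer's connection negotiation is a TCP property, not a UDP one.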
Which of the following factors contributes to the weakness of Wired Equivalent Privacy (WEP) protocol?
WEP uses a small range Initialization Vector (IV)
WEP uses Message Digest 5 (MD5)
WEP uses Diffie-Hellman
WEP does not use any Initialization Vector (IV)
WEP uses a small range Initialization Vector (IV) is the factor that contributes to the weakness of the Wired Equivalent Privacy (WEP) protocol. WEP is a security protocol that provides encryption and authentication for wireless networks, such as Wi-Fi. WEP uses the RC4 stream cipher to encrypt the data packets, and the CRC-32 checksum to verify the data integrity. WEP also uses a shared secret key, which is concatenated with a 24-bit Initialization Vector (IV), to generate the keystream for the RC4 encryption. WEP has several weaknesses and vulnerabilities, such as the small 24-bit IV space, which forces IVs to repeat on a busy network and causes keystream reuse; weaknesses in the RC4 key scheduling that allow the secret key to be recovered from captured packets; and the linear CRC-32 checksum, which allows packets to be modified without detection.
WEP has been deprecated and replaced by more secure protocols, such as Wi-Fi Protected Access (WPA) or Wi-Fi Protected Access II (WPA2), which use stronger encryption and authentication methods, such as the Temporal Key Integrity Protocol (TKIP), the Advanced Encryption Standard (AES), or the Extensible Authentication Protocol (EAP).
The other options are not factors that contribute to the weakness of WEP, but rather factors that are irrelevant or incorrect. WEP does not use Message Digest 5 (MD5), which is a hash function that produces a 128-bit output from a variable-length input. WEP does not use Diffie-Hellman, which is a method for generating a shared secret key between two parties. WEP does use an Initialization Vector (IV), which is a 24-bit value that is concatenated with the secret key.
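The "small range" of the IV is easy to quantify. The short calculation below shows the size of the 24-bit IV space and, via the birthday bound, roughly how many frames it takes before an IV (and therefore an RC4 keystream) is expected to repeat.

```python
import math

# A 24-bit IV gives only 2**24 possible values; by the birthday bound, an
# IV repeat is expected with ~50% probability after only a few thousand
# frames on a busy network -- and an IV repeat means keystream reuse.
IV_BITS = 24
iv_space = 2 ** IV_BITS
print(iv_space)  # 16777216

# Frames needed for a 50% chance of at least one IV collision.
frames_for_collision = math.sqrt(2 * iv_space * math.log(2))
print(round(frames_for_collision))  # ~4823
```

A few thousand frames is seconds to minutes of traffic on a loaded access point, which is why the small IV range is the decisive weakness rather than a theoretical one.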
What is the purpose of an Internet Protocol (IP) spoofing attack?
To send excessive amounts of data to a process, making it unpredictable
To intercept network traffic without authorization
To disguise the destination address from a target’s IP filtering devices
To convince a system that it is communicating with a known entity
The purpose of an Internet Protocol (IP) spoofing attack is to convince a system that it is communicating with a known entity. IP spoofing is a technique that involves creating and sending IP packets with a forged source IP address, which is usually the IP address of a trusted or authorized host. IP spoofing can be used for various malicious purposes, such as hijacking or intercepting TCP sessions, launching denial-of-service (DoS) or distributed denial-of-service (DDoS) attacks with untraceable source addresses, and bypassing IP-based access controls or trust relationships.
The purpose of IP spoofing is to convince a system that it is communicating with a known entity, because it allows the attacker to evade detection, avoid responsibility, and exploit trust relationships.
The other options are not the main purposes of IP spoofing, but rather the possible consequences or methods of IP spoofing. To send excessive amounts of data to a process, making it unpredictable is a possible consequence of IP spoofing, as it can cause a DoS or DDoS attack. To intercept network traffic without authorization is a possible method of IP spoofing, as it can be used to hijack or intercept a TCP session. To disguise the destination address from a target’s IP filtering devices is not a valid option, as IP spoofing involves forging the source address, not the destination address.
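A common defense against source-address forgery is ingress filtering (described in BCP 38): a border device drops inbound packets whose source address claims to belong to the internal network. The sketch below uses an invented internal range purely for illustration.

```python
import ipaddress

# Sketch of ingress filtering against IP spoofing: packets arriving from
# the outside must not carry an internal source address. The internal
# range here is illustrative.
INTERNAL = ipaddress.ip_network("192.168.0.0/16")

def accept_inbound(source_ip: str) -> bool:
    """Drop external packets that spoof an internal source address."""
    return ipaddress.ip_address(source_ip) not in INTERNAL

print(accept_inbound("203.0.113.7"))   # True  (plausible external source)
print(accept_inbound("192.168.1.10"))  # False (spoofed internal source)
```

This directly counters the attack's goal: a forged "trusted" source address is rejected at the boundary instead of convincing the target that it is talking to a known host.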
At what level of the Open System Interconnection (OSI) model is data at rest on a Storage Area Network (SAN) located?
Link layer
Physical layer
Session layer
Application layer
Data at rest on a Storage Area Network (SAN) is located at the physical layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The physical layer is the lowest layer of the OSI model, and it is responsible for the transmission and reception of raw bits over a physical medium, such as cables, wires, or optical fibers. The physical layer defines the physical characteristics of the medium, such as voltage, frequency, modulation, connectors, etc. The physical layer also deals with the physical topology of the network, such as bus, ring, star, mesh, etc.
A Storage Area Network (SAN) is a dedicated network that provides access to consolidated and block-level data storage. A SAN consists of storage devices, such as disks, tapes, or arrays, that are connected to servers or clients via a network infrastructure, such as switches, routers, or hubs. A SAN allows multiple servers or clients to share the same storage devices, and it provides high performance, availability, scalability, and security for data storage. Data at rest on a SAN is located at the physical layer of the OSI model, because it is stored as raw bits on the physical medium of the storage devices, and it is accessed by the servers or clients through the physical medium of the network infrastructure.
Which of the following operates at the Network Layer of the Open System Interconnection (OSI) model?
Packet filtering
Port services filtering
Content filtering
Application access control
 Packet filtering operates at the network layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The network layer is the third layer from the bottom of the OSI model, and it is responsible for routing and forwarding data packets between different networks or subnets. The network layer uses logical addresses, such as IP addresses, to identify the source and destination of the data packets, and it uses protocols, such as IP, ICMP, or ARP, to perform the routing and forwarding functions.
Packet filtering is a technique that controls the access to a network or a host by inspecting the incoming and outgoing data packets and applying a set of rules or policies to allow or deny them. Packet filtering can be performed by devices, such as routers, firewalls, or proxies, that operate at the network layer of the OSI model. Packet filtering typically examines the network layer header of the data packets, such as the source and destination IP addresses, the protocol type, or the fragmentation flags, and compares them with the predefined rules or policies. Packet filtering can also examine the transport layer header of the data packets, such as the source and destination port numbers, the TCP flags, or the sequence numbers, and compare them with the rules or policies. Packet filtering can provide a basic level of security and performance for a network or a host, but it also has some limitations, such as the inability to inspect the payload or the content of the data packets, the vulnerability to spoofing or fragmentation attacks, or the complexity and maintenance of the rules or policies.
The other options are not techniques that operate at the network layer of the OSI model, but rather at other layers. Port services filtering is a technique that controls the access to a network or a host by inspecting the transport layer header of the data packets and applying a set of rules or policies to allow or deny them based on the port numbers or the services. Port services filtering operates at the transport layer of the OSI model, which is the fourth layer from the bottom. Content filtering is a technique that controls the access to a network or a host by inspecting the application layer payload or the content of the data packets and applying a set of rules or policies to allow or deny them based on the keywords, URLs, file types, or other criteria. Content filtering operates at the application layer of the OSI model, which is the seventh and the topmost layer. Application access control is a technique that controls the access to a network or a host by inspecting the application layer identity or the credentials of the users or the processes and applying a set of rules or policies to allow or deny them based on the roles, permissions, or other attributes. Application access control operates at the application layer of the OSI model, which is the seventh and the topmost layer.
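The network-layer nature of packet filtering can be made concrete with a toy rule evaluator: the rules match only on the header fields the text names (source address and protocol), never on payload content. The rule set is invented for illustration.

```python
import ipaddress

# Toy packet filter: first matching rule wins, with an implicit default
# deny. Rules inspect only network/transport header fields, never payload.
RULES = [
    {"action": "deny",  "src": "10.0.0.0/8", "proto": "any"},
    {"action": "allow", "src": "0.0.0.0/0",  "proto": "tcp"},
]

def filter_packet(src_ip: str, proto: str) -> str:
    src = ipaddress.ip_address(src_ip)
    for rule in RULES:                               # first match wins
        if src in ipaddress.ip_network(rule["src"]) and \
           rule["proto"] in ("any", proto):
            return rule["action"]
    return "deny"                                    # default deny

print(filter_packet("10.1.2.3", "tcp"))      # deny  (blocked source range)
print(filter_packet("198.51.100.9", "tcp"))  # allow
print(filter_packet("198.51.100.9", "udp"))  # deny  (default deny)
```

Note what the filter cannot do: a packet carrying an exploit in its payload but with an allowed source and protocol passes straight through, which is the limitation the explanation attributes to network-layer filtering.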
Which of the following is the MOST crucial for a successful audit plan?
Defining the scope of the audit to be performed
Identifying the security controls to be implemented
Working with the system owner on new controls
Acquiring evidence of systems that are not compliant
 An audit is an independent and objective examination of an organization’s activities, systems, processes, or controls to evaluate their adequacy, effectiveness, efficiency, and compliance with applicable standards, policies, laws, or regulations. An audit plan is a document that outlines the objectives, scope, methodology, criteria, schedule, and resources of an audit. The most crucial element of a successful audit plan is defining the scope of the audit to be performed, which is the extent and boundaries of the audit, such as the subject matter, the time period, the locations, the departments, the functions, the systems, or the processes to be audited. The scope of the audit determines what will be included or excluded from the audit, and it helps to ensure that the audit objectives are met and the audit resources are used efficiently and effectively. Identifying the security controls to be implemented, working with the system owner on new controls, and acquiring evidence of systems that are not compliant are all important tasks in an audit, but they are not the most crucial for a successful audit plan, as they depend on the scope of the audit to be defined first. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 54. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 69.
Which item below is a federated identity standard?
802.11i
Kerberos
Lightweight Directory Access Protocol (LDAP)
Security Assertion Markup Language (SAML)
 A federated identity standard is Security Assertion Markup Language (SAML). SAML is a standard that enables the exchange of authentication and authorization information between different parties, such as service providers and identity providers, using XML-based messages called assertions. SAML can facilitate the single sign-on (SSO) process, which allows a user to access multiple services or applications with a single login session, without having to provide their credentials multiple times. SAML can also support the federated identity management, which allows a user to use their identity or credentials from one domain or organization to access the services or applications from another domain or organization, without having to create or maintain separate accounts. 802.11i, Kerberos, and LDAP are not federated identity standards, as they are related to the wireless network security, the network authentication protocol, or the directory service protocol, not the exchange of authentication and authorization information between different parties. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 692. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 708.
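The XML assertions SAML exchanges can be illustrated with a hand-written fragment. This is a minimal, illustrative structure, not a complete or schema-valid SAML document (it omits signatures, conditions, and timestamps), and the issuer and subject values are invented.

```python
import xml.etree.ElementTree as ET

# Minimal, hand-written SAML-style assertion fragment: the identity
# provider asserts who the subject is and what attributes they carry.
assertion_xml = """
<Assertion xmlns="urn:oasis:names:tc:SAML:2.0:assertion" ID="_abc123">
  <Issuer>https://idp.example.org</Issuer>
  <Subject><NameID>alice@example.org</NameID></Subject>
  <AttributeStatement>
    <Attribute Name="role"><AttributeValue>auditor</AttributeValue></Attribute>
  </AttributeStatement>
</Assertion>
"""

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
root = ET.fromstring(assertion_xml)
print(root.find("saml:Issuer", NS).text)               # https://idp.example.org
print(root.find("saml:Subject/saml:NameID", NS).text)  # alice@example.org
```

The service provider never sees the user's password; it trusts the issuer's assertion about the subject, which is what makes the identity "federated" across domains.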
When dealing with compliance with the Payment Card Industry-Data Security Standard (PCI-DSS), an organization that shares card holder information with a service provider MUST do which of the following?
Perform a service provider PCI-DSS assessment on a yearly basis.
Validate the service provider's PCI-DSS compliance status on a regular basis.
Validate that the service providers security policies are in alignment with those of the organization.
Ensure that the service provider updates and tests its Disaster Recovery Plan (DRP) on a yearly basis.
 The action that an organization that shares card holder information with a service provider must do when dealing with compliance with the Payment Card Industry-Data Security Standard (PCI-DSS) is to validate the service provider’s PCI-DSS compliance status on a regular basis. PCI-DSS is a set of security standards that applies to any organization that stores, processes, or transmits card holder data, such as credit or debit card information. PCI-DSS aims to protect the card holder data from unauthorized access, use, disclosure, or theft, and to ensure the security and integrity of the payment transactions. If an organization shares card holder data with a service provider, such as a payment processor, a hosting provider, or a cloud provider, the organization is still responsible for the security and compliance of the card holder data, and must ensure that the service provider also meets the PCI-DSS requirements. The organization must validate the service provider’s PCI-DSS compliance status on a regular basis, by obtaining and reviewing the service provider’s PCI-DSS assessment reports, such as the Self-Assessment Questionnaire (SAQ), the Report on Compliance (ROC), or the Attestation of Compliance (AOC). Performing a service provider PCI-DSS assessment on a yearly basis, validating that the service provider’s security policies are in alignment with those of the organization, and ensuring that the service provider updates and tests its Disaster Recovery Plan (DRP) on a yearly basis are not the actions that an organization that shares card holder information with a service provider must do when dealing with compliance with PCI-DSS, as they are not sufficient or relevant to verify the service provider’s PCI-DSS compliance status or to protect the card holder data. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 49. 
Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 64.
When implementing a secure wireless network, which of the following supports authentication and authorization for individual client endpoints?
Temporal Key Integrity Protocol (TKIP)
Wi-Fi Protected Access (WPA) Pre-Shared Key (PSK)
Wi-Fi Protected Access 2 (WPA2) Enterprise
Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP)
When implementing a secure wireless network, the option that supports authentication and authorization for individual client endpoints is Wi-Fi Protected Access 2 (WPA2) Enterprise. WPA2 is a security protocol that provides encryption and authentication for wireless networks, based on the IEEE 802.11i standard. WPA2 has two modes: Personal and Enterprise. WPA2 Personal uses a Pre-Shared Key (PSK) that is shared among all the devices on the network, and does not require a separate authentication server. WPA2 Enterprise uses an Extensible Authentication Protocol (EAP) that authenticates each device individually, using a username and password or a certificate, and requires a Remote Authentication Dial-In User Service (RADIUS) server or another authentication server. WPA2 Enterprise provides more security and granularity than WPA2 Personal, as it can support different levels of access and permissions for different users or groups, and can prevent unauthorized or compromised devices from joining the network. Temporal Key Integrity Protocol (TKIP), Wi-Fi Protected Access (WPA) Pre-Shared Key (PSK), and Counter Mode with Cipher Block Chaining Message Authentication Code Protocol (CCMP) are not the options that support authentication and authorization for individual client endpoints, as they are related to the encryption or integrity of the wireless data, not the identity or access of the wireless devices. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 506. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 522.
What does secure authentication with logging provide?
Data integrity
Access accountability
Encryption logging format
Segregation of duties
Secure authentication with logging provides access accountability, which means that the actions of users can be traced and audited. Logging can help identify unauthorized or malicious activities, enforce policies, and support investigations.
Which of the following is critical for establishing an initial baseline for software components in the operation and maintenance of applications?
Application monitoring procedures
Configuration control procedures
Security audit procedures
Software patching procedures
Configuration control procedures are critical for establishing an initial baseline for software components in the operation and maintenance of applications. Configuration control procedures are the processes and activities that ensure the integrity, consistency, and traceability of the software components throughout the SDLC. Configuration control procedures include identifying, documenting, storing, reviewing, approving, and updating the software components, as well as managing the changes and versions of the components. By establishing an initial baseline, the organization can have a reference point for measuring and evaluating the performance, quality, and security of the software components, and for applying and tracking the changes and updates to the components. The other options are not as critical as configuration control procedures: application monitoring procedures and security audit procedures do not establish an initial baseline, and software patching procedures do not apply to all software components. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 468; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, page 568.
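One concrete form of an initial baseline is a table of cryptographic hashes of the components, against which later states are compared. The component names and contents below are in-memory stand-ins for real files, used purely for illustration.

```python
import hashlib

# Sketch of configuration control: record a hash of each component as the
# initial baseline, then detect any component that no longer matches it.
components = {
    "app.py":     b"print('v1.0')",
    "config.ini": b"debug=false",
}

def take_baseline(items: dict) -> dict:
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in items.items()}

baseline = take_baseline(components)          # the initial baseline

# Later, a component is modified without going through change control...
components["config.ini"] = b"debug=true"

changed = [name for name, data in components.items()
           if hashlib.sha256(data).hexdigest() != baseline[name]]
print(changed)  # ['config.ini']
```

The baseline is what makes the change *detectable*; without that reference point there is nothing to compare a later, possibly unauthorized, state against.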
The use of private and public encryption keys is fundamental in the implementation of which of the following?
Diffie-Hellman algorithm
Secure Sockets Layer (SSL)
Advanced Encryption Standard (AES)
Message Digest 5 (MD5)
The use of private and public encryption keys is fundamental in the implementation of Secure Sockets Layer (SSL). SSL is a protocol that provides secure communication over the Internet by using public key cryptography and digital certificates. SSL works as follows: the client and the server exchange hello messages to negotiate the protocol version and cipher suite; the server presents its digital certificate, which contains its public key; the client verifies the certificate, generates a pre-master secret, encrypts it with the server’s public key, and sends it to the server; and both parties derive a shared session key from the pre-master secret, which is then used to encrypt and authenticate the rest of the session.
The use of private and public encryption keys is fundamental in the implementation of SSL because it enables the authentication of the parties, the establishment of the shared secret key, and the protection of the data from eavesdropping, tampering, and replay attacks.
The other options are not protocols or algorithms that use private and public encryption keys in their implementation. Diffie-Hellman algorithm is a method for generating a shared secret key between two parties, but it does not use private and public encryption keys, but rather public and private parameters. Advanced Encryption Standard (AES) is a symmetric encryption algorithm that uses the same key for encryption and decryption, but it does not use private and public encryption keys, but rather a single secret key. Message Digest 5 (MD5) is a hash function that produces a fixed-length output from a variable-length input, but it does not use private and public encryption keys, but rather a one-way mathematical function.
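The public/private key relationship at the heart of SSL can be shown with a textbook-sized RSA example. The primes are deliberately tiny and there is no padding, so this is a teaching sketch only; real TLS uses keys of 2048 bits or more.

```python
# Toy RSA key pair: anyone may encrypt with the public key (e, n), but
# only the private key holder (d, n) can decrypt -- the property an
# SSL client relies on when sending the pre-master secret to the server.
p, q = 61, 53
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent = 2753 (inverse of e mod phi)

def encrypt(m: int) -> int:      # with the PUBLIC key
    return pow(m, e, n)

def decrypt(c: int) -> int:      # with the PRIVATE key
    return pow(c, d, n)

secret = 42                      # e.g. a pre-master secret, must be < n
print(decrypt(encrypt(secret)))  # 42
```

Contrast this with AES, where one shared secret key both encrypts and decrypts, and with MD5, which has no key at all: only public-key systems split the roles the way SSL needs.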
Which technique can be used to make an encryption scheme more resistant to a known plaintext attack?
Hashing the data before encryption
Hashing the data after encryption
Compressing the data after encryption
Compressing the data before encryption
Compressing the data before encryption is a technique that can be used to make an encryption scheme more resistant to a known plaintext attack. A known plaintext attack is a type of cryptanalysis where the attacker has access to some pairs of plaintext and ciphertext encrypted with the same key, and tries to recover the key or decrypt other ciphertexts. A known plaintext attack can exploit the statistical properties or patterns of the plaintext or the ciphertext to reduce the search space or guess the key. Compressing the data before encryption can reduce the redundancy and increase the entropy of the plaintext, making it harder for the attacker to find any correlations or similarities between the plaintext and the ciphertext. Compressing the data before encryption can also reduce the size of the plaintext, making it more difficult for the attacker to obtain enough plaintext-ciphertext pairs for a successful attack.
The other options are not techniques that can be used to make an encryption scheme more resistant to a known plaintext attack, but rather techniques that can introduce other security issues or inefficiencies. Hashing the data before encryption is not a useful technique, as hashing is a one-way function that cannot be reversed, and the encrypted hash cannot be decrypted to recover the original data. Hashing the data after encryption is also not a useful technique, as hashing does not add any security to the encryption, and the hash can be easily computed by anyone who has access to the ciphertext. Compressing the data after encryption is not a recommended technique, as compression algorithms usually work better on uncompressed data, and compressing the ciphertext can introduce errors or vulnerabilities that can compromise the encryption.
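The entropy argument can be checked numerically: a repetitive plaintext has low Shannon entropy per byte, while its compressed form is both shorter and much closer to random-looking. The sample text is illustrative.

```python
import math, zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 is the maximum)."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

# Highly redundant plaintext -- the structure a known plaintext attack feeds on.
plaintext = b"ATTACK AT DAWN. " * 200
compressed = zlib.compress(plaintext, level=9)

print(len(plaintext))                                      # 3200
print(len(compressed) < len(plaintext))                    # True
print(byte_entropy(compressed) > byte_entropy(plaintext))  # True
```

After compression the attacker faces fewer bytes, with higher entropy and less exploitable structure, before encryption is even applied.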
Which of the following mobile code security models relies only on trust?
Code signing
Class authentication
Sandboxing
Type safety
Code signing is the mobile code security model that relies only on trust. Mobile code is a type of software that can be transferred from one system to another and executed without installation or compilation. Mobile code can be used for various purposes, such as web applications, applets, scripts, and macros. Mobile code can also pose various security risks, such as malicious code, unauthorized access, and data leakage. Mobile code security models are the techniques that are used to protect systems and users from the threats of mobile code. Code signing is a mobile code security model that relies only on trust, which means that the security of the mobile code depends on the reputation and credibility of the code provider. Code signing works as follows: the code provider signs the code with its private key; the code consumer verifies the signature against the provider's certificate to confirm the code's origin and integrity; and the consumer then decides whether to trust the provider and run the code.
Code signing relies only on trust because it does not enforce any security restrictions or controls on the mobile code, but rather leaves the decision to the code consumer. Code signing also does not guarantee the quality or functionality of the mobile code, but rather the authenticity and integrity of the code provider. Code signing can be effective if the code consumer knows and trusts the code provider, and if the code provider follows the security standards and best practices. However, code signing can also be ineffective if the code consumer is unaware or careless of the code provider, or if the code provider is compromised or malicious.
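A minimal sketch of the verify-before-run decision that code signing gives the consumer. Real code signing uses an asymmetric signature verified against the provider's certificate; the HMAC here is only a dependency-free stand-in for that signature.

```python
import hashlib
import hmac

# In real code signing the provider signs with an asymmetric private key and
# the consumer verifies with the certificate's public key; a shared HMAC key
# stands in here only to keep the sketch dependency-free.
PROVIDER_KEY = b"provider-signing-key"

def sign_code(code: bytes) -> bytes:
    digest = hashlib.sha256(code).digest()
    return hmac.new(PROVIDER_KEY, digest, hashlib.sha256).digest()

def verify_code(code: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign_code(code), signature)

applet = b"print('hello from mobile code')"
sig = sign_code(applet)
assert verify_code(applet, sig)                      # intact code verifies
assert not verify_code(applet + b" tampered", sig)   # any change breaks it
```

Note that a valid signature proves only origin and integrity; as the text says, it enforces no restriction on what the code does once the consumer chooses to run it.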
The other options are not mobile code security models that rely only on trust, but rather on other techniques that limit or isolate the mobile code. Class authentication is a mobile code security model that verifies the permissions and capabilities of the mobile code based on its class or type, and allows or denies the execution of the mobile code accordingly. Sandboxing is a mobile code security model that executes the mobile code in a separate and restricted environment, and prevents the mobile code from accessing or affecting the system resources or data. Type safety is a mobile code security model that checks the validity and consistency of the mobile code, and prevents the mobile code from performing illegal or unsafe operations.
What is the second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management?
Implementation Phase
Initialization Phase
Cancellation Phase
Issued Phase
The second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management is the initialization phase. PKI is a system that uses public key cryptography and digital certificates to provide authentication, confidentiality, integrity, and non-repudiation for electronic transactions. PKI key/certificate life-cycle management is the process of managing the creation, distribution, usage, storage, revocation, and expiration of keys and certificates in a PKI system. The key/certificate life-cycle management consists of six phases: pre-certification, initialization, certification, operational, suspension, and termination. The initialization phase is the second phase, where the key pair and the certificate request are generated by the end entity or the registration authority (RA). The initialization phase involves registering the end entity with the RA, generating the key pair, securely storing the private key, and preparing and submitting the certificate request to the certificate authority (CA).
The other options are not the second phase of PKI key/certificate life-cycle management, but rather other phases. The implementation phase is not a phase of PKI key/certificate life-cycle management, but rather a phase of PKI system deployment, where the PKI components and policies are installed and configured. The cancellation phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the termination phase, where the key pair and the certificate are permanently revoked and deleted. The issued phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the certification phase, where the CA verifies and approves the certificate request and issues the certificate to the end entity or the RA.
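The six-phase ordering named above can be sketched as a simple state machine. This is a hypothetical model that enforces strictly sequential phases; a real PKI also allows, for example, going from operational directly to termination without a suspension.

```python
# Hypothetical sketch of the six-phase life cycle named in the text,
# enforcing that phases are entered in order.
PHASES = ["pre-certification", "initialization", "certification",
          "operational", "suspension", "termination"]

class CertificateLifecycle:
    def __init__(self):
        self.phase = PHASES[0]

    def advance(self, next_phase: str) -> None:
        current = PHASES.index(self.phase)
        if PHASES.index(next_phase) != current + 1:
            raise ValueError(f"cannot go from {self.phase} to {next_phase}")
        self.phase = next_phase

lc = CertificateLifecycle()
lc.advance("initialization")        # the second phase, as the answer states
assert lc.phase == "initialization"
```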
Which component of the Security Content Automation Protocol (SCAP) specification contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments?
Common Vulnerabilities and Exposures (CVE)
Common Vulnerability Scoring System (CVSS)
Asset Reporting Format (ARF)
Open Vulnerability and Assessment Language (OVAL)
The component of the Security Content Automation Protocol (SCAP) specification that contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments is the Common Vulnerability Scoring System (CVSS). CVSS is a framework that provides a standardized and objective way to measure and communicate the characteristics and impacts of vulnerabilities. CVSS consists of three metric groups: base, temporal, and environmental. The base metric group captures the intrinsic and fundamental properties of a vulnerability that are constant over time and across user environments. The temporal metric group captures the characteristics of a vulnerability that change over time, such as the availability and effectiveness of exploits, patches, and workarounds. The environmental metric group captures the characteristics of a vulnerability that are relevant and unique to a user’s environment, such as the configuration and importance of the affected system. Each metric group has a set of metrics that are assigned values based on the vulnerability’s attributes. The values are then combined using a formula to produce a numerical score that ranges from 0 to 10, where 0 means no impact and 10 means critical impact. The score can also be translated into a qualitative rating that ranges from none to low, medium, high, and critical. CVSS provides a consistent and comprehensive way to estimate the severity of vulnerabilities and prioritize their remediation.
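The qualitative severity rating scale that CVSS v3.x attaches to the numerical score can be sketched as:

```python
def cvss_rating(score: float) -> str:
    # CVSS v3.x qualitative severity rating scale.
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

assert cvss_rating(0.0) == "None"
assert cvss_rating(5.0) == "Medium"
assert cvss_rating(9.8) == "Critical"
```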
The other options are not components of the SCAP specification that contain the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments, but rather components that serve other purposes. Common Vulnerabilities and Exposures (CVE) is a component that provides a standardized and unique identifier and description for each publicly known vulnerability. CVE facilitates the sharing and comparison of vulnerability information across different sources and tools. Asset Reporting Format (ARF) is a component that provides a standardized and extensible format for expressing the information about the assets and their characteristics, such as configuration, vulnerabilities, and compliance. ARF enables the aggregation and correlation of asset information from different sources and tools. Open Vulnerability and Assessment Language (OVAL) is a component that provides a standardized and expressive language for defining and testing the state of a system for the presence of vulnerabilities, configuration issues, patches, and other aspects. OVAL enables the automation and interoperability of vulnerability assessment and management.
Who in the organization is accountable for classification of data information assets?
Data owner
Data architect
Chief Information Security Officer (CISO)
Chief Information Officer (CIO)
The person in the organization who is accountable for the classification of data information assets is the data owner. The data owner is the person or entity that has the authority and responsibility for the creation, collection, processing, and disposal of a set of data. The data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. The data owner should be able to determine the impact of the data on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the data on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data. The data owner should also ensure that the data is properly labeled, stored, accessed, shared, and destroyed according to the data classification policy and procedures.
The other options are not the persons in the organization who are accountable for the classification of data information assets, but rather persons who have other roles or functions related to data management. The data architect is the person or entity that designs and models the structure, format, and relationships of the data, as well as the data standards, specifications, and lifecycle. The data architect supports the data owner by providing technical guidance and expertise on the data architecture and quality. The Chief Information Security Officer (CISO) is the person or entity that oversees the security strategy, policies, and programs of the organization, as well as the security performance and incidents. The CISO supports the data owner by providing security leadership and governance, as well as ensuring the compliance and alignment of the data security with the organizational objectives and regulations. The Chief Information Officer (CIO) is the person or entity that manages the information technology (IT) resources and services of the organization, as well as the IT strategy and innovation. The CIO supports the data owner by providing IT management and direction, as well as ensuring the availability, reliability, and scalability of the IT infrastructure and applications.
Which security service is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key?
Confidentiality
Integrity
Identification
Availability
The security service that is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key is identification. Identification is the process of verifying the identity of a person or entity that claims to be who or what it is. Identification can be achieved by using public key cryptography and digital signatures, which are based on the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. This process works as follows: the sender encrypts the message (or, more commonly, its hash) with the sender’s private key to produce a digital signature, and the receiver decrypts the signature with the sender’s public key; if the result matches, the message must have come from the holder of the private key.
The process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key serves identification because it ensures that only the sender can produce a valid ciphertext that can be decrypted by the receiver, and that the receiver can verify the sender’s identity by using the sender’s public key. This process also provides non-repudiation, which means that the sender cannot deny sending the message or the receiver cannot deny receiving the message, as the ciphertext serves as a proof of origin and delivery.
The other options are not the security services that are served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. Confidentiality is the process of ensuring that the message is only readable by the intended parties, and it is achieved by encrypting plaintext with the receiver’s public key and decrypting ciphertext with the receiver’s private key. Integrity is the process of ensuring that the message is not modified or corrupted during transmission, and it is achieved by using hash functions and message authentication codes. Availability is the process of ensuring that the message is accessible and usable by the authorized parties, and it is achieved by using redundancy, backup, and recovery mechanisms.
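A textbook-RSA toy (tiny, insecure parameters, for illustration only) showing how "encrypt with the sender's private key, decrypt with the sender's public key" identifies the sender:

```python
# Textbook-RSA toy with tiny primes (p=61, q=53) -- for illustration only,
# never for real use.
p, q = 61, 53
n = p * q                 # 3233
e = 17                    # public exponent
d = 2753                  # private exponent: (e * d) % lcm(p-1, q-1) == 1

def sign(message: int) -> int:
    # "Encrypt with the sender's private key"
    return pow(message, d, n)

def verify(message: int, signature: int) -> bool:
    # "Decrypt with the sender's public key"
    return pow(signature, e, n) == message

m = 65
s = sign(m)
assert verify(m, s)          # identifies the holder of the private key
assert not verify(m + 1, s)  # a different message fails verification
```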
A manufacturing organization wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. Which of the following is the BEST solution for the manufacturing organization?
Trusted third-party certification
Lightweight Directory Access Protocol (LDAP)
Security Assertion Markup Language (SAML)
Cross-certification
Security Assertion Markup Language (SAML) is the best solution for the manufacturing organization that wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. FIM is a process that allows the sharing and recognition of identities across different organizations that have a trust relationship. FIM enables the users of one organization to access the resources or services of another organization without having to create or maintain multiple accounts or credentials. FIM can provide several benefits, such as single sign-on across organizational boundaries, reduced account and credential administration, and an improved user experience.
SAML is a standard protocol that supports FIM by allowing the exchange of authentication and authorization information between different parties. SAML uses XML-based messages, called assertions, to convey the identity, attributes, and entitlements of a user to a service provider. SAML defines three roles for the parties involved in FIM: the principal (the user seeking access), the identity provider (IdP), which authenticates the user and issues assertions, and the service provider (SP), which consumes assertions and grants or denies access.
SAML works as follows: the user requests a resource from the SP; the SP redirects the user to the IdP with an authentication request; the IdP authenticates the user and returns a digitally signed assertion; and the SP validates the assertion and grants or denies access accordingly.
SAML is the best solution for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, because it can enable the seamless and secure access to the resources or services across the different organizations, without requiring the users to create or maintain multiple accounts or credentials. SAML can also provide interoperability and compatibility between different platforms and technologies, as it is based on a standard and open protocol.
The other options are not the best solutions for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, but rather solutions that have other limitations or drawbacks. Trusted third-party certification is a process that involves a third party, such as a certificate authority (CA), that issues and verifies digital certificates that contain the public key and identity information of a user or an entity. Trusted third-party certification can provide authentication and encryption for the communication between different parties, but it does not provide authorization or entitlement information for the access to the resources or services. Lightweight Directory Access Protocol (LDAP) is a protocol that allows the access and management of directory services, such as Active Directory, that store the identity and attribute information of users and entities. LDAP can provide a centralized and standardized way to store and retrieve identity and attribute information, but it does not provide a mechanism to exchange or federate the information across different organizations. Cross-certification is a process that involves two or more CAs that establish a trust relationship and recognize each other’s certificates. Cross-certification can extend the trust and validity of the certificates across different domains or organizations, but it does not provide a mechanism to exchange or federate the identity, attribute, or entitlement information.
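A minimal, hypothetical SAML 2.0 assertion skeleton illustrates the kind of identity and attribute information the IdP conveys to the SP. The issuer, subject, and attribute names below are invented, and required attributes and the XML signature are omitted.

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily abbreviated SAML 2.0 assertion: real assertions
# carry IDs, timestamps, audience restrictions, and an XML signature.
assertion_xml = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Issuer>https://idp.manufacturer.example</saml:Issuer>
  <saml:Subject>
    <saml:NameID>alice@supplier7.example</saml:NameID>
  </saml:Subject>
  <saml:AttributeStatement>
    <saml:Attribute Name="role">
      <saml:AttributeValue>purchasing</saml:AttributeValue>
    </saml:Attribute>
  </saml:AttributeStatement>
</saml:Assertion>
"""

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
root = ET.fromstring(assertion_xml)
name_id = root.find("saml:Subject/saml:NameID", NS).text
role = root.find(".//saml:AttributeValue", NS).text
assert name_id == "alice@supplier7.example"
assert role == "purchasing"
```

The SP parses out the subject and attributes like this after validating the signature, which is how authorization information (not just authentication) crosses the organizational boundary.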
What is the BEST approach for controlling access to highly sensitive information when employees have the same level of security clearance?
Audit logs
Role-Based Access Control (RBAC)
Two-factor authentication
Application of least privilege
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance. The principle of least privilege is a security concept that states that every user or process should have the minimum amount of access rights and permissions that are necessary to perform their tasks or functions, and nothing more. The principle of least privilege can provide several benefits, such as reducing the attack surface, limiting the damage that a compromised account can cause, and reducing the opportunity for insider misuse.
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance, because it can ensure that the employees can only access the information that is relevant and necessary for their tasks or functions, and that they cannot access or manipulate the information that is beyond their scope or authority. For example, if the highly sensitive information is related to a specific project or department, then only the employees who are involved in that project or department should have access to that information, and not the employees who have the same level of security clearance but are not involved in that project or department.
The other options are not the best approaches for controlling access to highly sensitive information when employees have the same level of security clearance, but rather approaches that have other purposes or effects. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the sensitive data. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents. However, audit logs cannot prevent or reduce the access or disclosure of the sensitive information, but rather provide evidence or clues after the fact. Role-Based Access Control (RBAC) is a method that enforces the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. RBAC can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, RBAC cannot control the access to highly sensitive information when employees have the same level of security clearance and the same role or function within the organization, but rather rely on other criteria or mechanisms. Two-factor authentication is a technique that verifies the identity of the users by requiring them to provide two pieces of evidence or factors, such as something they know (e.g., password, PIN), something they have (e.g., token, smart card), or something they are (e.g., fingerprint, face). Two-factor authentication can provide a strong and preventive layer of security by preventing unauthorized access to the system or network by the users who do not have both factors. 
However, two-factor authentication cannot control the access to highly sensitive information when employees have the same level of security clearance and the same two factors, but rather rely on other criteria or mechanisms.
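A sketch, under assumed data structures, of why least privilege (need to know) restricts access even among users with identical clearance:

```python
# Hypothetical sketch: clearance alone is not enough -- access also
# requires membership in the relevant project ("need to know").
ACCESS_CONTROL = {
    "project-x-design.doc": {"classification": "secret",
                             "project": "project-x"},
}

def can_read(user: dict, document: str) -> bool:
    doc = ACCESS_CONTROL[document]
    cleared = user["clearance"] == doc["classification"]
    need_to_know = doc["project"] in user["projects"]
    return cleared and need_to_know

alice = {"clearance": "secret", "projects": {"project-x"}}
bob = {"clearance": "secret", "projects": {"project-y"}}   # same clearance
assert can_read(alice, "project-x-design.doc")
assert not can_read(bob, "project-x-design.doc")
```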
Which of the following BEST describes an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices?
Derived credential
Temporary security credential
Mobile device credentialing service
Digest authentication
Derived credential is the best description of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices. A smart card is a device that contains a microchip that stores a private key and a digital certificate that are used for authentication and encryption. A smart card is typically inserted into a reader that is attached to a computer or a terminal, and the user enters a personal identification number (PIN) to unlock the smart card and access the private key and the certificate. A smart card can provide a high level of security and convenience for the user, as it implements a two-factor authentication method that combines something the user has (the smart card) and something the user knows (the PIN).
However, a smart card may not be compatible or convenient for mobile devices, such as smartphones or tablets, that do not have a smart card reader or a USB port. To address this issue, a derived credential is a solution that allows the user to use a mobile device as an alternative to a smart card for authentication and encryption. A derived credential is a cryptographic key and a certificate that are derived from the smart card private key and certificate, and that are stored on the mobile device. A derived credential works as follows: the user first proves possession of the smart card and knowledge of its PIN to an issuing system; the issuer then provisions the derived credential to the mobile device, where it is protected by the device’s secure element or key store and unlocked with a PIN or a biometric feature.
A derived credential can provide a secure and convenient way to use a mobile device as an alternative to a smart card for authentication and encryption, as it implements a two-factor authentication method that combines something the user has (the mobile device) and something the user is (the biometric feature). A derived credential can also comply with the standards and policies for the use of smart cards, such as the Personal Identity Verification (PIV) or the Common Access Card (CAC) programs.
The other options are not the best descriptions of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices, but rather descriptions of other methods or concepts. Temporary security credential is a method that involves issuing a short-lived credential, such as a token or a password, that can be used for a limited time or a specific purpose. Temporary security credential can provide a flexible and dynamic way to grant access to the users or entities, but it does not involve deriving a cryptographic key from a smart card private key. Mobile device credentialing service is a concept that involves providing a service that can issue, manage, or revoke credentials for mobile devices, such as certificates, tokens, or passwords. Mobile device credentialing service can provide a centralized and standardized way to control the access of mobile devices, but it does not involve deriving a cryptographic key from a smart card private key. Digest authentication is a method that involves using a hash function, such as MD5, to generate a digest or a fingerprint of the user’s credentials, such as the username and password, and sending it to the server for verification. Digest authentication can provide a more secure way to authenticate the user than the basic authentication, which sends the credentials in plain text, but it does not involve deriving a cryptographic key from a smart card private key.
Users require access rights that allow them to view the average salary of groups of employees. Which control would prevent the users from obtaining an individual employee’s salary?
Limit access to predefined queries
Segregate the database into a small number of partitions each with a separate security level
Implement Role Based Access Control (RBAC)
Reduce the number of people who have access to the system for statistical purposes
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees. A query is a request for information from a database, which can be expressed in a structured query language (SQL) or a graphical user interface (GUI). A query can specify the criteria, conditions, and operations for selecting, filtering, sorting, grouping, and aggregating the data from the database. A predefined query is a query that has been created and stored in advance by the database administrator or the data owner, and that can be executed by the authorized users without any modification. A predefined query can provide several benefits, such as consistent and repeatable results, protection against SQL injection and ad hoc query abuse, and enforcement of least privilege at the data level.
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, because it can ensure that the users can only access the data that is relevant and necessary for their tasks, and that they cannot access or manipulate the data that is beyond their scope or authority. For example, a predefined query can be created and stored that calculates and displays the average salary of groups of employees based on certain criteria, such as department, position, or experience. The users who need to view this information can execute this predefined query, but they cannot modify it or create their own queries that might reveal the individual employee’s salary or other sensitive data.
The other options are not the controls that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, but rather controls that have other purposes or effects. Segregating the database into a small number of partitions each with a separate security level is a control that would improve the performance and security of the database by dividing it into smaller and manageable segments that can be accessed and processed independently and concurrently. However, this control would not prevent the users from obtaining an individual employee’s salary, if they have access to the partition that contains the salary data, and if they can create or modify their own queries. Implementing Role Based Access Control (RBAC) is a control that would enforce the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. However, this control would not prevent the users from obtaining an individual employee’s salary, if their roles or functions require them to access the salary data, and if they can create or modify their own queries. Reducing the number of people who have access to the system for statistical purposes is a control that would reduce the risk and impact of unauthorized access or disclosure of the sensitive data by minimizing the exposure and distribution of the data. However, this control would not prevent the users from obtaining an individual employee’s salary, if they are among the people who have access to the system, and if they can create or modify their own queries.
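A small sqlite3 sketch (the table and figures are invented) of a predefined aggregate query that exposes group averages while suppressing small groups that would reveal an individual's salary:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [("alice", "eng", 100.0), ("bob", "eng", 120.0),
                  ("carol", "hr", 90.0)])

# The only query users may run: an aggregate, never row-level salaries.
# HAVING COUNT(*) >= 2 suppresses groups so small that the average
# would reveal an individual's salary.
PREDEFINED_QUERY = """
    SELECT dept, AVG(salary) FROM employees
    GROUP BY dept HAVING COUNT(*) >= 2
"""

def run_predefined_report():
    return dict(conn.execute(PREDEFINED_QUERY).fetchall())

report = run_predefined_report()
assert report == {"eng": 110.0}        # hr is suppressed: group too small
```

The small-group suppression in the query also illustrates inference control: with only one employee in a group, the "average" is that person's salary.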
What BEST describes the confidentiality, integrity, availability triad?
A tool used to assist in understanding how to protect the organization's data
The three-step approach to determine the risk level of an organization
The implementation of security systems to protect the organization's data
A vulnerability assessment to see how well the organization's data is protected
The confidentiality, integrity, availability triad, or CIA triad, is a tool used to assist in understanding how to protect the organization’s data. The CIA triad is a model that defines the three fundamental and interrelated security objectives of information security, which are: confidentiality, which ensures that data is accessible only to authorized parties; integrity, which ensures that data is accurate, complete, and protected from unauthorized modification; and availability, which ensures that data and systems are accessible to authorized users when needed.
A breach investigation determined that a website was exploited through an open source component. Which of the following is the FIRST step in the process that could have prevented this breach?
Application whitelisting
Web application firewall (WAF)
Vulnerability remediation
Software inventory
Vulnerability remediation is the process that could have prevented the breach of a website that was exploited through an open source component. Vulnerability remediation involves identifying, assessing, and resolving the vulnerabilities or weaknesses that may affect the security or the functionality of the systems or the components. Vulnerability remediation can prevent the breach of a website that was exploited through an open source component, because it can identify the vulnerable open source component through scanning or advisory feeds, assess the severity of the vulnerability, and apply the available patch, fix, or update before an attacker can exploit it.
The other options are not the processes that could have prevented the breach of a website that was exploited through an open source component. Application whitelisting is the process of allowing only the authorized or trusted applications or components to run or execute on the system or the network, and blocking or restricting the unauthorized or untrusted applications or components. Application whitelisting could not have prevented the breach of a website that was exploited through an open source component, because the open source component may have been authorized or trusted by the system or the network, but it may still have some vulnerabilities or issues that could be exploited. Web application firewall (WAF) is the process of protecting the web applications or the websites from the common or the specific web-based attacks or threats, such as SQL injection, cross-site scripting, or denial-of-service, by filtering, monitoring, or blocking the incoming or outgoing web traffic. WAF could not have prevented the breach of a website that was exploited through an open source component, because the open source component may have been exploited by an attack or a threat that is not web-based, or that is not detected or blocked by the WAF. Software inventory is the process of maintaining and managing the records or the information of the software or the components that are installed or used on the system or the network, such as the name, the version, the license, or the source. Software inventory could not have prevented the breach of a website that was exploited through an open source component, because the software inventory may not have the details or the status of the vulnerabilities or the issues of the software or the components, or the patches, fixes, or updates that are available or required for them. References: [CISSP All-in-One Exam Guide, Eighth Edition], Chapter 4: Security Architecture and Engineering, page 451. 
[Official (ISC)2 CISSP CBK Reference, Fifth Edition], Chapter 4: Security Architecture and Engineering, page 452.
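A sketch, with a fictitious component and advisory, of the identification step of vulnerability remediation: cross-checking a component inventory against known-vulnerable versions.

```python
# Hypothetical sketch: cross-check a software inventory against a feed of
# known-vulnerable open source component versions. The component name and
# CVE identifier below are fictitious.
KNOWN_VULNERABLE = {
    ("examplelib", "1.4.2"): "CVE-2099-0001",
}

inventory = [("examplelib", "1.4.2"), ("otherlib", "2.0.0")]

def find_remediation_targets(components):
    return [(name, version, KNOWN_VULNERABLE[(name, version)])
            for name, version in components
            if (name, version) in KNOWN_VULNERABLE]

targets = find_remediation_targets(inventory)
assert targets == [("examplelib", "1.4.2", "CVE-2099-0001")]
```

This also shows why a software inventory alone is not enough: the inventory supplies the component list, but only the remediation process matches it against advisories and drives the patching.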
A financial company has decided to move its main business application to the Cloud. The legal department objects, arguing that the move of the platform should comply with several regulatory obligations such as the General Data Protection Regulation (GDPR) and ensure data confidentiality. The Chief Information Security Officer (CISO) says that the cloud provider has met all regulatory requirements and even provides its own encryption solution with internally-managed encryption keys to address data confidentiality. Did the CISO address all the legal requirements in this situation?
No, because the encryption solution is internal to the cloud provider.
Yes, because the cloud provider meets all regulatory requirements.
Yes, because the cloud provider is GDPR compliant.
No, because the cloud provider is not certified to host government data.
The CISO did not address all the legal requirements in this situation, because the encryption solution is internal to the cloud provider. Moving the main business application to the cloud involves transferring the data and the processing of the data from the organization’s own premises to the cloud provider’s premises. This may raise several legal and regulatory issues, such as the compliance with the data protection laws, the data sovereignty laws, the data breach notification laws, and the contractual obligations. The General Data Protection Regulation (GDPR) is one of the data protection laws that applies to the organizations that process the personal data of the individuals in the European Union (EU), regardless of where the processing takes place. The GDPR requires the organizations to ensure the confidentiality, the integrity, and the availability of the personal data, and to implement appropriate technical and organizational measures to protect the personal data from unauthorized or unlawful access, use, disclosure, alteration, or destruction. One of the technical measures that can be used to protect the personal data is encryption, which is a technique that transforms the data into an unreadable or unintelligible form, using a key and an algorithm, and that prevents unauthorized access, modification, or disclosure of the data. However, the encryption solution that the cloud provider offers is internal to the cloud provider, meaning that the cloud provider has the control and the access to the encryption keys and the encryption algorithms. This may pose a risk to the data confidentiality, as the cloud provider may be able to decrypt the data, or may be compelled to disclose the data to third parties, such as law enforcement agencies or other governments. Therefore, the CISO did not address all the legal requirements in this situation, as the encryption solution is internal to the cloud provider, and does not guarantee the data confidentiality. 
The organization may need to use its own encryption solution, or to negotiate with the cloud provider to have more control and visibility over the encryption keys and the encryption algorithms. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Communication and Network Security, page 120. CISSP Practice Exam – FREE 20 Questions and Answers, Question 19.
An organization has a short-term agreement with a public Cloud Service Provider
(CSP). Which of the following BEST protects sensitive data once the agreement
expires and the assets are reused?
Recommend that the business data owners use continuous monitoring and analysis of applications to prevent data loss.
Recommend that the business data owners use internal encryption keys for data-at-rest and data-in-transit to the storage environment.
Use a contractual agreement to ensure the CSP wipes the data from the storage environment.
Use a National Institute of Standards and Technology (NIST) recommendation for wiping data on the storage environment.
When an organization uses a public cloud service provider (CSP) to store sensitive data, it should ensure that the data is protected both during and after the service agreement. One of the best ways to do this is to use a contractual agreement that specifies the CSP’s obligations and responsibilities for wiping the data from the storage environment once the agreement expires and the assets are reused. This way, the organization can hold the CSP accountable for the secure deletion of the data and prevent any unauthorized access or disclosure of the data by the CSP or other customers. Using internal encryption keys, continuous monitoring, or NIST recommendations are good practices, but they do not guarantee that the CSP will wipe the data from the storage environment. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Cloud Computing and Virtualization, page 281; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 5: Identity and Access Management, Question 5.9, page 216.
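The "wipe the data" obligation can be illustrated with a minimal, stdlib-only sketch: overwriting a file with random bytes before deleting it, in the spirit of the NIST SP 800-88 "Clear" technique. This is illustrative only; on SSDs and shared cloud storage, overwriting does not guarantee sanitization (wear leveling, snapshots, replicas), which is exactly why the contractual obligation on the CSP matters.

```python
import os
import tempfile

def wipe_file(path, passes=1):
    """Overwrite a file's contents with random bytes, then delete it.
    A single-pass overwrite in the spirit of NIST SP 800-88 'Clear'."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace every byte of the file
            f.flush()
            os.fsync(f.fileno())       # force the overwrite to disk
    os.remove(path)

# Example: wipe a temporary file holding (hypothetical) sensitive data.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"customer records")
wipe_file(path)
```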
Verify the camera's log for recent logins outside of the Internet Technology (IT) department.
Verify the security and encryption protocol the camera uses.
Verify the security camera requires authentication to log into the management console.
Verify the most recent firmware version is installed on the camera.
Verifying the security camera requires authentication to log into the management console is the best way to ensure the security of the camera. Authentication is the process of verifying the identity of a user or device that attempts to access a system or resource. Authentication prevents unauthorized access, modification, or misuse of the camera and its data. Authentication can be done using different factors, such as passwords, tokens, biometrics, or certificates. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, page 201. Free daily CISSP practice questions, Question 2.
While reviewing the financial reporting risks of a third-party application, which of the following Service Organization Control (SOC) reports will be the MOST useful?
SOC 1
SOC 2
SOC 3
SOC for cybersecurity
SOC 1 is the most useful Service Organization Control (SOC) report for reviewing the financial reporting risks of a third-party application, because it focuses on the service organization’s internal controls over financial reporting (ICFR). SOC 1 reports are based on the Statement on Standards for Attestation Engagements (SSAE) No. 18, and can be either Type 1 or Type 2, depending on whether they provide a point-in-time or a period-of-time evaluation of the controls. SOC 2, SOC 3, and SOC for Cybersecurity reports are based on the Trust Services Criteria and cover aspects of the service organization’s security, availability, confidentiality, processing integrity, and privacy; they are not specifically designed for financial reporting risks. References: CISSP Official Study Guide, 9th Edition, page 1016; CISSP All-in-One Exam Guide, 8th Edition, page 1095
How long should the records on a project be retained?
For the duration of the project, or at the discretion of the record owner
Until they are no longer useful or required by policy
Until five years after the project ends, then move to archives
For the duration of the organization fiscal year
Records on a project are any documents or data that provide evidence of the project activities, results, or decisions. Records on a project should be retained until they are no longer useful or required by policy. The retention period of records may vary depending on the type, purpose, and value of the records, as well as the legal, regulatory, or contractual obligations of the organization. Retaining records for the duration of the project, or at the discretion of the record owner, may not be sufficient or consistent with the organization’s policies. Retaining records until five years after the project ends, or for the duration of the organization’s fiscal year, may not be necessary or appropriate for all records. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 45; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 1: Security and Risk Management, Question 1.17, page 51.
Which of the following BEST provides for non-repudiation of user account actions?
Centralized authentication system
File auditing system
Managed Intrusion Detection System (IDS)
Centralized logging system
A centralized logging system is the best option for providing non-repudiation of user account actions. Non-repudiation is the ability to prove that a certain action or event occurred and who was responsible for it, without the possibility of denial or dispute. A centralized logging system is a system that collects, stores, and analyzes the log records generated by various sources, such as applications, servers, devices, or users. A centralized logging system can provide non-repudiation by capturing and preserving the evidence of the user account actions, such as the timestamp, the username, the IP address, the action performed, and the outcome. A centralized logging system can also prevent the tampering or deletion of the log records by using encryption, hashing, digital signatures, or write-once media. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, page 382. CISSP Practice Exam | Boson, Question 10.
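The tamper-evidence properties described above can be sketched with an HMAC chain, where each record's keyed hash also covers the previous record's hash, so any edit or deletion breaks the chain. The key name is hypothetical; note that a shared-key HMAC provides tamper evidence rather than full non-repudiation, which requires asymmetric digital signatures, as the answer notes.

```python
import hashlib
import hmac
import json
import time

SECRET = b"log-server-signing-key"  # hypothetical key known only to the log service

def append_entry(log, user, action):
    """Append a record whose HMAC covers the previous record's HMAC,
    forming a tamper-evident chain."""
    prev = log[-1]["mac"] if log else ""
    entry = {"ts": time.time(), "user": user, "action": action, "prev": prev}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["mac"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    log.append(entry)

def verify(log):
    """Recompute every HMAC and check the chain links."""
    for i, entry in enumerate(log):
        body = {k: v for k, v in entry.items() if k != "mac"}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(entry["mac"], expected):
            return False  # record contents were altered
        if entry["prev"] != (log[i - 1]["mac"] if i else ""):
            return False  # chain was broken (record inserted/removed)
    return True
```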
A small office is running WiFi 4 APs, and neighboring offices do not want to increase the throughput to associated devices. Which of the following is the MOST cost-efficient way for the office to increase network performance?
Add another AP.
Disable the 2.4GHz radios
Enable channel bonding.
Upgrade to WiFi 5.
The most cost-efficient way for the office to increase network performance is to upgrade to WiFi 5 (IEEE 802.11ac), a newer generation of wireless technology that offers faster speeds, lower latency, and higher capacity than WiFi 4 (802.11n). WiFi 5 operates in the 5 GHz band (remaining backward compatible with 2.4 GHz devices through 802.11n) and supports features such as MU-MIMO, beamforming, and wider channel bonding, which improve the throughput and efficiency of the wireless network. Upgrading to WiFi 5 requires replacing the existing APs with compatible models, but it is not as expensive or complex as the other options, which are either ineffective or impractical: they may not address the root cause of the problem, may interfere with the neighboring offices, or may require additional hardware or configuration. References: CISSP - Certified Information Systems Security Professional, Domain 4. Communication and Network Security, 4.1 Implement secure design principles in network architectures, 4.1.3 Secure network components, 4.1.3.1 Wireless access points; CISSP Exam Outline, Domain 4. Communication and Network Security, 4.1.3 Secure network components
Which of the following is security control volatility?
A reference to the stability of the security control.
A reference to how unpredictable the security control is.
A reference to the impact of the security control.
A reference to the likelihood of change in the security control.
Security control volatility is a reference to the likelihood of change in the security control. Security control volatility is a factor that affects the selection, implementation, and maintenance of security controls in an organization. Security control volatility can be influenced by various internal and external factors, such as business needs, technology trends, regulatory requirements, threat landscape, and risk appetite. Security control volatility can have implications for the security posture, performance, and cost of the organization. The other options are not definitions of security control volatility, as they either do not relate to the change in the security control, or do not reflect the volatility aspect. References: CISSP - Certified Information Systems Security Professional, Domain 1. Security and Risk Management, 1.4 Understand and apply risk management concepts, 1.4.3 Determine risk management strategy, 1.4.3.1 Security control volatility; CISSP Exam Outline, Domain 1. Security and Risk Management, 1.4 Understand and apply risk management concepts, 1.4.3 Determine risk management strategy, 1.4.3.1 Security control volatility
An organization that has achieved a Capability Maturity Model Integration (CMMI) level of 4 has done which of the following?
Addressed continuous innovative process improvement
Addressed the causes of common process variance
Achieved optimized process performance
Achieved predictable process performance
An organization that has achieved a Capability Maturity Model Integration (CMMI) level of 4 has achieved predictable process performance. CMMI is a framework that provides a set of best practices and guidelines for improving the capability and maturity of an organization’s processes, such as software development, service delivery, or project management. CMMI defines five maturity levels, each representing a stage of process improvement: Level 1 (Initial), Level 2 (Managed), Level 3 (Defined), Level 4 (Quantitatively Managed), and Level 5 (Optimizing).
At level 4 (Quantitatively Managed), the organization has established quantitative objectives and metrics for its processes, and uses statistical and analytical techniques to monitor and control process variation and performance, so that process outcomes are predictable. Continuous innovative process improvement, addressing the causes of common process variance, and optimized process performance are characteristics of level 5, the highest and most mature level of CMMI. References:
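The level 4 statistical process control described above can be sketched with a simple control-limit calculation; the metric name and sample values below are invented for illustration.

```python
import statistics

def control_limits(samples, k=3):
    """Lower/upper control limits: mean +/- k standard deviations.
    A process is 'in control' while observations stay inside them."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return mu - k * sigma, mu + k * sigma

# Hypothetical weekly defect rates for a quantitatively managed process.
defect_rates = [2.1, 1.9, 2.3, 2.0, 2.2]
low, high = control_limits(defect_rates)
```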
When a system changes significantly, who is PRIMARILY responsible for assessing the security impact?
Chief Information Security Officer (CISO)
Information System Owner
Information System Security Officer (ISSO)
Authorizing Official
The Information System Security Officer (ISSO) is the person who is responsible for ensuring that the appropriate operational security posture is maintained for an information system or program. The ISSO is also responsible for assessing the security impact of any significant changes to the system, such as configuration, patching, or upgrading. The ISSO should coordinate with the Information System Owner, the Authorizing Official, and the Chief Information Security Officer (CISO) to report and mitigate any security risks or issues arising from the system changes. References:
A software developer wishes to write code that will execute safely and only as intended. Which of the following programming language types is MOST likely to achieve this goal?
Statically typed
Weakly typed
Strongly typed
Dynamically typed
A strongly typed programming language enforces strict rules on the data types and operations that can be used in code. It prevents or detects errors such as type mismatches, invalid type conversions, or memory misuse at compile time or run time, ensuring that the code executes safely and only as intended; features such as type inference, type checking, and type safety also enhance the readability, maintainability, and security of the code. Examples of strongly typed languages are Java, C#, and Python. By contrast, a weakly typed language allows more flexibility and leniency: it may perform implicit type conversion, coercion, or casting at run time, and may not detect or report errors until they cause unexpected or undesirable results. Examples of weakly typed languages are JavaScript, PHP, and Perl. The strong/weak distinction is separate from the static/dynamic distinction. A statically typed language assigns and checks the data types of variables and expressions at compile time, requiring the programmer to declare data types explicitly and ensuring type consistency before execution; examples are C, C++, and Java. A statically typed language differs from a dynamically typed language, which assigns and checks the data types of variables and expressions at run time.
A dynamically typed language does not require the programmer to declare the data types of variables and expressions explicitly in the code, and allows the code to adapt and change the data types during execution. Examples of dynamically typed languages are Python, Ruby, and JavaScript. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10: Software Development Security, page 657. Official (ISC)² CISSP CBK Reference, Fifth Edition, Domain 8: Software Development Security, page 1009.
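Python itself illustrates that strong and dynamic typing are independent properties: types are checked at run time, but incompatible types are never silently coerced. A minimal sketch:

```python
# Strong typing: mixing incompatible types raises an error at run time
# instead of being silently coerced (as a weakly typed language might do).
try:
    result = "1" + 1          # no implicit str/int conversion in Python
except TypeError:
    result = None

# The programmer must convert explicitly.
explicit = int("1") + 1       # explicit conversion, then integer addition
```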
Why is planning the MOST critical phase of a Role Based Access Control (RBAC) implementation?
The criteria for measuring risk is defined.
User populations to be assigned to each role is determined.
Role mining to define common access patterns is performed.
The foundational criteria are defined.
Role mining to define common access patterns is the task that is performed in the planning phase of a Role Based Access Control (RBAC) implementation, and it is the most critical task in this phase. RBAC is a type of access control that grants or denies access to a system or a resource based on the roles that are assigned to the users. Roles are the collections of permissions or privileges that correspond to the functions or the responsibilities of the users in the organization. Role mining is a technique that involves analyzing the existing user accounts and their access rights, and identifying the common access patterns or the similarities among them. Role mining can help define the roles and the role hierarchies that are suitable for the organization, and that can simplify and optimize the access management process. Role mining can also help reduce the complexity and the redundancy of the access rights, and improve the security and the efficiency of the RBAC system. Role mining is performed in the planning phase of the RBAC implementation, which is the phase where the objectives, the scope, the requirements, and the resources for the RBAC system are defined and established. Role mining is the most critical task in this phase, as it can affect the design, the deployment, and the operation of the RBAC system, and it can determine the success or the failure of the RBAC implementation. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, page 219. CISSP Testking ISC Exam Questions, Question 17.
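The role-mining step can be sketched as grouping users that share an identical permission set into candidate roles; the user names and permissions below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical existing user-to-permission assignments to analyze.
user_perms = {
    "alice": frozenset({"read_orders", "write_orders"}),
    "bob": frozenset({"read_orders", "write_orders"}),
    "carol": frozenset({"read_reports"}),
}

def mine_roles(user_perms):
    """Group users sharing an identical permission set into candidate roles."""
    roles = defaultdict(list)
    for user, perms in sorted(user_perms.items()):
        roles[perms].append(user)   # permission set -> users who hold it
    return dict(roles)
```

Real role-mining tools also merge overlapping sets and build role hierarchies, but the common-access-pattern idea is the same.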
The security team is notified that a device on the network is infected with malware. Which of the following is MOST effective in enabling the device to be quickly located and remediated?
Data loss protection (DLP)
Intrusion detection
Vulnerability scanner
Information Technology Asset Management (ITAM)
Information Technology Asset Management (ITAM) is the most effective tool in enabling the device on the network to be quickly located and remediated. ITAM is a process that tracks and manages the inventory, configuration, and lifecycle of the IT assets in an organization, such as hardware, software, and network devices. ITAM can help to identify and locate the device that is infected with malware, by providing information such as the device name, IP address, MAC address, serial number, location, owner, and status. ITAM can also help to remediate the device, by providing the necessary tools and procedures to isolate, quarantine, scan, clean, or replace the device. The other options are not as effective as ITAM, as they either do not locate or remediate the device, do not focus on the device level, or do not address the malware issue. References: CISSP - Certified Information Systems Security Professional, Domain 7. Security Operations, 7.1 Understand and support investigations, 7.1.2 Conduct logging and monitoring activities, 7.1.2.1 Asset management; CISSP Exam Outline, Domain 7. Security Operations, 7.1 Understand and support investigations, 7.1.2 Conduct logging and monitoring activities, 7.1.2.1 Asset management
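The lookup an ITAM inventory enables can be sketched as follows; the device names, addresses, and locations are invented for illustration.

```python
# Hypothetical asset records an ITAM system might hold.
inventory = [
    {"name": "ws-042", "ip": "10.0.4.17", "owner": "j.doe", "location": "Bldg A / Rm 210"},
    {"name": "srv-db1", "ip": "10.0.9.3", "owner": "dba-team", "location": "DC rack 7"},
]

def locate_by_ip(inventory, ip):
    """Return the asset record for an IP address, or None if untracked."""
    return next((a for a in inventory if a["ip"] == ip), None)
```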
Using Address Space Layout Randomization (ASLR) reduces the potential for which of the following attacks?
SQL injection (SQLi)
Man-in-the-middle (MITM)
Cross-Site Scripting (XSS)
Heap overflow
 Address Space Layout Randomization (ASLR) is a security technique that randomizes the memory locations of the executable code, data, and libraries of a software application or system, making it harder for attackers to predict or manipulate the memory addresses of the target. ASLR reduces the potential for heap overflow attacks, which are a type of buffer overflow attack that exploit the memory allocation and deallocation functions of the heap, which is a dynamic memory area where variables and objects are stored during the execution of a program. Heap overflow attacks can result in arbitrary code execution, denial of service, or privilege escalation. ASLR makes heap overflow attacks more difficult by changing the base address of the heap each time the program runs, making it less likely for the attacker to find or overwrite the memory locations of the heap variables or objects. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21: Software Development Security, pp. 2071-2072; [Official (ISC)2 CISSP CBK Reference, Fifth Edition], Domain 8: Software Development Security, pp. 1439-1440.
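The randomization can be observed directly: printing the load address of a shared-library symbol yields a different value on each run when ASLR is enabled. A sketch assuming a Linux or macOS environment where `ctypes.CDLL(None)` exposes the C library:

```python
import ctypes

# Handle to the current process's global symbol table (Linux/macOS).
libc = ctypes.CDLL(None)

# The address printf was mapped at in this run; with ASLR enabled,
# running this script twice prints different values.
addr = ctypes.cast(libc.printf, ctypes.c_void_p).value
print(hex(addr))
```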
Which is the MOST effective countermeasure to prevent electromagnetic emanations on unshielded data cable?
Move cables away from exterior facing windows
Encase exposed cable runs in metal conduit
Enable Power over Ethernet (PoE) to increase voltage
Bundle exposed cables together to disguise their signals
Encasing exposed cable runs in metal conduit is the most effective countermeasure to prevent electromagnetic emanations on unshielded data cable. Electromagnetic emanations are the unintentional radiation of electromagnetic signals from electronic devices, such as computers, monitors, or cables. These signals can be intercepted and analyzed by attackers to obtain sensitive information. Unshielded data cable, such as unshielded twisted pair or standard coaxial cable, is more susceptible to electromagnetic emanations than shielded cable, such as shielded twisted pair (STP); fiber optic cable does not produce electromagnetic emanations at all. Encasing unshielded cable in metal conduit can reduce the amount of emanations and provide physical protection from tampering. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Communication and Network Security, page 164; [Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4: Communication and Network Security, page 238]
If an employee transfers from one role to another, which of the following actions should this trigger within the identity and access management (IAM) lifecycle?
New account creation
User access review and adjustment
Deprovisioning
System account access review and adjustment
User access review and adjustment is the action that should be triggered within the identity and access management (IAM) lifecycle when an employee transfers from one role to another. IAM is the process of identifying, authenticating, authorizing, and managing the users and their access rights to the organization’s resources and systems. The IAM lifecycle consists of four phases: provisioning, maintenance, review, and deprovisioning. When an employee transfers from one role to another, their access rights may need to be changed or updated to reflect their new responsibilities and duties. This requires a user access review and adjustment, which is part of the maintenance phase of the IAM lifecycle. User access review and adjustment involves verifying and validating the user’s identity and current access rights, and modifying or revoking the access rights as needed, based on the principle of least privilege and the organization’s policies and standards. The other options are not correct. New account creation is part of the provisioning phase of the IAM lifecycle, and it is not necessary when an employee transfers from one role to another, unless the employee needs to access a new system or resource that requires a separate account. Deprovisioning is part of the deprovisioning phase of the IAM lifecycle, and it involves deleting or disabling the user’s account and access rights when the user leaves the organization or no longer needs access to the system or resource. System account access review and adjustment is not a specific action within the IAM lifecycle, although it may be part of the user access review and adjustment process, if the user has access to system accounts or privileges. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, page 615. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5: Identity and Access Management, page 616.
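The review-and-adjustment step can be sketched as diffing the old role's permissions against the new role's; the role names and permissions below are invented for illustration.

```python
# Hypothetical role-to-permission mapping maintained by the IAM system.
ROLE_PERMS = {
    "sales_rep": {"crm_read", "crm_write"},
    "sales_manager": {"crm_read", "crm_write", "crm_approve", "reports_read"},
}

def transfer_adjustment(old_role, new_role):
    """Access to revoke and to grant on a role transfer, so that the user's
    rights end up aligned with the new role only (least privilege)."""
    old, new = ROLE_PERMS[old_role], ROLE_PERMS[new_role]
    return {"revoke": old - new, "grant": new - old}
```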
Which type of access control includes a system that allows only users that are type=managers and department=sales to access employee records?
Discretionary access control (DAC)
Mandatory access control (MAC)
Role-based access control (RBAC)
Attribute-based access control (ABAC)
Attribute-based access control (ABAC) is a type of access control that includes a system that allows only users that are type=managers and department=sales to access employee records. ABAC is a flexible and granular access control model that uses attributes to define access rules and policies, and to make access decisions. Attributes are characteristics or properties of entities, such as users, resources, actions, or environments. For example, a user attribute can be the role, department, clearance, or location of the user. A resource attribute can be the type, classification, owner, or location of the resource. An action attribute can be the read, write, execute, or delete operation on the resource. An environment attribute can be the time, date, network address, or device of the access request. ABAC evaluates the attributes of the subject (user), the object (resource), the requested action, and the environment, and compares them with the predefined rules and policies to grant or deny access. For example, a rule can state that only users with the attribute type=managers and department=sales can access resources with the attribute type=employee records and action=read. ABAC can enforce dynamic and context-aware access control policies, and support complex scenarios involving multiple subjects, objects, and actions. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, page 294. Official (ISC)² CISSP CBK Reference, Fifth Edition, Domain 5: Identity and Access Management (IAM), page 607.
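The rule from the question can be sketched as a single ABAC policy check over subject, resource, and action attributes; the attribute names mirror the question's wording.

```python
def abac_permit(subject, resource, action):
    """Permit only when every attribute in the rule matches:
    type=managers AND department=sales may read employee records."""
    return (subject.get("type") == "managers"
            and subject.get("department") == "sales"
            and resource.get("type") == "employee records"
            and action == "read")

manager = {"type": "managers", "department": "sales"}
record = {"type": "employee records"}
```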
The Industrial Control System (ICS) Computer Emergency Response Team (CERT) has released an alert regarding ICS-focused malware specifically propagating through Windows-based business networks. Technicians at a local water utility note that their dams, canals, and locks controlled by an internal Supervisory Control and Data Acquisition (SCADA) system have been malfunctioning. A digital forensics professional is consulted in the Incident Response (IR) and recovery. Which of the following is the
MOST challenging aspect of this investigation?
SCADA network latency
Group policy implementation
Volatility of data
Physical access to the system
The volatility of data refers to the degree to which data can be lost or altered due to various factors, such as power loss, hardware failure, software error, or human intervention. In a digital forensics investigation, the volatility of data poses a challenge because it requires the investigator to follow a specific order of volatility when collecting and preserving evidence. The order of volatility is based on the principle that the most volatile data should be collected first, before it is overwritten or destroyed. The order of volatility typically includes the following types of data, from most volatile to least volatile: registers, cache, random access memory (RAM), routing tables, kernel statistics, process tables, network connections, executable files, swap files, hard disk, remote logging and monitoring data, physical configuration, network topology, and archival media. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 693; [CISSP CBK, Fifth Edition, Chapter 8, page 1049].
Which of the following attacks is dependent upon the compromise of a secondary target in order to reach the primary target?
Watering hole
Brute force
Spear phishing
Address Resolution Protocol (ARP) poisoning
A watering hole attack is a type of attack that targets a specific group of users by compromising a website that they frequently visit. The attacker then uses the compromised website to deliver malware or exploit code to the visitors, hoping to infect their systems and gain access to their networks or data. A watering hole attack is dependent upon the compromise of a secondary target (the website) in order to reach the primary target (the users). A brute force attack, a spear phishing attack, and an Address Resolution Protocol (ARP) poisoning attack are not dependent on compromising a secondary target. References:
A company is attempting to enhance the security of its user authentication processes. After evaluating several options, the company has decided to utilize Identity as a Service (IDaaS).
Which of the following factors leads the company to choose an IDaaS as their solution?
In-house development provides more control.
In-house team lacks resources to support an on-premise solution.
Third-party solutions are inherently more secure.
Third-party solutions are known for transferring the risk to the vendor.
The factor that leads the company to choose an IDaaS as their solution is that the in-house team lacks resources to support an on-premise solution. IDaaS is a cloud-based service that provides identity and access management capabilities, such as single sign-on, multi-factor authentication, or identity federation, to the users and applications of an organization. Compared with an on-premise solution, which is installed and managed by the organization itself on its own servers or infrastructure, IDaaS offers lower upfront cost, reduced maintenance and staffing burden, faster deployment, and built-in scalability and availability. Because the in-house team lacks the resources to deploy and support an on-premise solution, a provider-managed IDaaS offering is the appropriate choice. In-house development does provide more control, but that is an argument against IDaaS; and third-party solutions are neither inherently more secure nor a complete transfer of risk, since the organization remains accountable for its data and identities.
A hospital’s building controls system monitors and operates the environmental equipment to maintain a safe and comfortable environment. Which of the following could be used to minimize the risk of utility supply interruption?
Digital devices that can turn equipment off and continuously cycle rapidly in order to increase supplies and conceal activity on the hospital network
Standardized building controls system software with high connectivity to hospital networks
Lock out maintenance personnel from the building controls system access that can impact critical utility supplies
Digital protection and control devices capable of minimizing the adverse impact to critical utility
The best option to minimize the risk of utility supply interruption for a hospital’s building controls system is to use digital protection and control devices capable of minimizing the adverse impact to critical utility. Digital protection and control devices are devices that monitor and regulate the utility supply, such as electricity, water, or gas, and detect and respond to any faults, anomalies, or disruptions in the utility supply. Digital protection and control devices can minimize the adverse impact to critical utility by isolating the affected components, switching to alternative sources, adjusting the load or demand, or activating backup or emergency systems. Digital protection and control devices can help to ensure the continuity and reliability of the utility supply, and to prevent or mitigate any potential damage or harm to the hospital’s building controls system, or to the patients and staff12. References: CISSP CBK, Fifth Edition, Chapter 4, page 383; CISSP Practice Exam – FREE 20 Questions and Answers, Question 17.
What is the FINAL step in the waterfall method for contingency planning?
Maintenance
Testing
Implementation
Training
The final step in the waterfall method for contingency planning is maintenance. Contingency planning identifies, analyzes, and prepares the actions an organization will take in the event of a disruption or interruption, such as a natural disaster, human error, or cyberattack, in order to ensure continuity and availability and to limit consequences such as loss of revenue, reputation, or customers. Contingency planning can follow various methods, such as the waterfall, agile, or spiral method, each of which structures the process into phases such as initiation, planning, testing, implementation, training, and maintenance. In the waterfall method, these phases are performed sequentially, and maintenance comes last: the contingency plan is monitored, updated, and improved on an ongoing basis so that it remains effective, efficient, and relevant as the organization, its environment, and the threat landscape change.
Testing, implementation, and training are not the final steps in the waterfall method, as each occurs earlier in the process. Testing verifies and validates the contingency plan, for example through exercises or simulations, to ensure its functionality, reliability, and security before it is relied upon. Implementation executes or activates the plan when a disruption occurs, to ensure the continuity, availability, or recovery of the organization. Training educates personnel, stakeholders, and customers about the plan, to ensure their awareness, preparedness, and participation. All of these precede maintenance, which continues for the life of the plan. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, page 443; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 7: Security Operations, Question 7.12, page 275.
A user sends an e-mail request asking for read-only access to files that are not considered sensitive. A Discretionary Access Control (DAC) methodology is in place. Which is the MOST suitable approach that the administrator should take?
Administrator should request data owner approval to the user access
Administrator should request manager approval for the user access
Administrator should directly grant the access to the non-sensitive files
Administrator should assess the user access need and either grant or deny the access
According to the CISSP Official (ISC)2 Practice Tests [3], the most suitable approach for the administrator when a user requests read-only access to non-sensitive files under a Discretionary Access Control (DAC) methodology is to request data owner approval for the user access. DAC is a type of access control that grants or denies access to an object based on the identity and permissions of the subject, and the discretion of the owner of the object. The owner has the authority and responsibility to determine who can access the object and at what level, such as read, write, execute, or delete, and can delegate access rights to other subjects or groups, or revoke them as needed. The administrator manages and maintains the system and its access control mechanisms, but does not have the authority to grant or deny access to objects without the owner's consent. Therefore, the administrator should request data owner approval, regardless of the sensitivity of the files, to ensure the access is authorized and compliant with the DAC methodology. Requesting manager approval is not the most suitable approach, as the manager may not be the owner of the files and may lack the authority or knowledge to grant or deny access. Directly granting access to the non-sensitive files, or assessing the access need and granting or denying it, is not suitable either, as both bypass the owner's discretion, violate the DAC methodology, and may introduce unauthorized or excessive access to the files. References: 3
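A minimal sketch of this DAC decision flow, with illustrative names rather than any real system's API: only the object's owner can grant rights, which is why the administrator must obtain owner approval instead of granting access directly.

```python
# Minimal sketch of Discretionary Access Control: the object's owner,
# not the administrator, decides who may access the file.
# All names and structures here are illustrative.

class DacObject:
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner
        self.acl = {}  # subject -> set of permissions granted by the owner

    def grant(self, granting_subject, subject, permissions):
        # Only the owner may delegate access rights.
        if granting_subject != self.owner:
            raise PermissionError("only the data owner can grant access")
        self.acl.setdefault(subject, set()).update(permissions)

    def check(self, subject, permission):
        return permission in self.acl.get(subject, set())

files = DacObject("report.txt", owner="alice")
# The administrator forwards the request; the owner approves it.
files.grant("alice", "bob", {"read"})
print(files.check("bob", "read"))    # True
print(files.check("bob", "write"))   # False
```

An attempt by anyone other than "alice" to call `grant` raises `PermissionError`, mirroring the rule that the administrator cannot authorize access on the owner's behalf.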
Which of the following is the MOST important output from a mobile application threat modeling exercise according to Open Web Application Security Project (OWASP)?
Application interface entry and endpoints
The likelihood and impact of a vulnerability
Countermeasures and mitigations for vulnerabilities
A data flow diagram for the application and attack surface analysis
The most important output from a mobile application threat modeling exercise according to OWASP is a data flow diagram for the application and attack surface analysis. A data flow diagram is a graphical representation of the data flows and processes within the application, as well as the external entities and boundaries that interact with the application. An attack surface analysis is a systematic evaluation of the potential vulnerabilities and threats that can affect the application, based on the data flow diagram and other sources of information. These two outputs can help identify and prioritize the security risks and requirements for the mobile application, as well as the countermeasures and mitigations for the vulnerabilities.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 487; [Official
A vulnerability in which of the following components would be MOST difficult to detect?
Kernel
Shared libraries
Hardware
System application
According to the CISSP CBK Official Study Guide, a vulnerability in hardware would be the most difficult to detect. A vulnerability is a weakness or exposure in a system, network, or application that may be exploited by threats and cause harm to the organization or its assets. A vulnerability can exist in various components, such as the kernel, shared libraries, hardware, or system applications. A hardware vulnerability is the most difficult to detect because identifying and measuring it may require physical access, specialized tools, or advanced skills. Hardware is the physical, tangible component of a system that provides its basic functionality, performance, and support, such as the processor, memory, disk, or network card. Hardware may have vulnerabilities due to design flaws, manufacturing defects, configuration errors, or physical damage, and such a vulnerability may affect the security, reliability, or availability of the system, for example by causing data leakage, performance degradation, or system failure. A vulnerability in the kernel would not be the most difficult to detect, although it may still be difficult. The kernel is the core component of a system that provides basic functionality, performance, and control, such as the operating system, the hypervisor, or the firmware. The kernel may have vulnerabilities due to design flaws, coding errors, configuration errors, or malicious modifications, and such a vulnerability may cause privilege escalation, system compromise, or a system crash.
A vulnerability in the kernel may be detected by using various tools, techniques, or methods, such as code analysis, vulnerability scanning, or penetration testing. A vulnerability in the shared libraries would not be the most difficult to detect either, although it may still be difficult. Shared libraries are the reusable or common components of a system, network, or application that provide functionality, performance, and compatibility, such as dynamic link libraries, application programming interfaces, or frameworks.
When using Generic Routing Encapsulation (GRE) tunneling over Internet Protocol version 4 (IPv4), where is the GRE header inserted?
Into the options field
Between the delivery header and payload
Between the source and destination addresses
Into the destination address
Generic Routing Encapsulation (GRE) is a protocol that encapsulates a packet of one protocol type within another protocol type [4]. When using GRE tunneling over IPv4, the GRE header is inserted between the delivery header and the payload [5]. The delivery header contains the new source and destination IP addresses of the tunnel endpoints, while the payload contains the original IP packet [4]. The GRE header contains information such as protocol type, checksum, and key [6].
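The byte layout can be sketched in Python with the standard `struct` module. The field values below are simplified for illustration (a zeroed IPv4 checksum, and the basic 4-byte GRE header of RFC 2784 with no optional checksum, key, or sequence fields):

```python
import struct

# Sketch of the GRE-over-IPv4 packet layout: the 4-byte basic GRE header
# sits between the outer "delivery" IPv4 header and the encapsulated
# payload. Values are simplified for illustration.

def gre_header(protocol_type=0x0800):
    # Flags/version word = 0 (no checksum, key, or sequence), then the
    # EtherType of the encapsulated payload (0x0800 = IPv4).
    return struct.pack("!HH", 0x0000, protocol_type)

def delivery_header(src, dst):
    # Simplified 20-byte outer IPv4 header; protocol 47 marks GRE.
    version_ihl, tos, total_len = 0x45, 0, 20 + 4 + 4
    ident, flags_frag, ttl, proto = 0, 0, 64, 47
    checksum = 0  # left zero in this sketch
    return struct.pack("!BBHHHBBH4s4s", version_ihl, tos, total_len,
                       ident, flags_frag, ttl, proto, checksum, src, dst)

payload = b"orig"  # stands in for the original encapsulated IP packet
packet = (delivery_header(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02")
          + gre_header() + payload)

print(len(packet))          # 28: 20-byte delivery header + 4-byte GRE + payload
print(packet[20:24].hex())  # 00000800: the GRE header between header and payload
```

The GRE header occupies bytes 20-23, immediately after the delivery header, and the outer header's protocol field (byte 9) is 47 to mark the payload as GRE.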
Which of the following describes the BEST configuration management practice?
After installing a new system, the configuration files are copied to a separate back-up system and hashed to detect tampering.
After installing a new system, the configuration files are copied to an air-gapped system and hashed to detect tampering.
The firewall rules are backed up to an air-gapped system.
A baseline configuration is created and maintained for all relevant systems.
The best configuration management practice is to create and maintain a baseline configuration for all relevant systems. A baseline configuration is a documented and approved set of specifications and settings for a system or component that serves as a standard for comparison and evaluation. A baseline configuration can help ensure the consistency, security, and performance of the system or component, as well as facilitate the identification and resolution of any deviations or issues. A baseline configuration should be updated and reviewed regularly to reflect the changes and improvements made to the system or component. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, p. 456; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, p. 869.
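The comparison against a baseline can be sketched in a few lines; the setting names and values are illustrative.

```python
# Sketch of baseline configuration management: the approved baseline is
# recorded once, and each system's current settings are compared against
# it to flag deviations. Setting names are illustrative.

baseline = {
    "ssh_root_login": "no",
    "password_min_length": 14,
    "audit_logging": "enabled",
}

def find_deviations(current, baseline):
    """Return settings that differ from (or are missing from) the baseline."""
    return {key: (current.get(key), expected)
            for key, expected in baseline.items()
            if current.get(key) != expected}

current = {"ssh_root_login": "yes", "password_min_length": 14}
print(find_deviations(current, baseline))
# {'ssh_root_login': ('yes', 'no'), 'audit_logging': (None, 'enabled')}
```

A system matching the baseline returns an empty dict, so the same routine doubles as a compliance check during regular reviews.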
To protect auditable information, which of the following MUST be configured to only allow read access?
Logging configurations
Transaction log files
User account configurations
Access control lists (ACL)
 To protect auditable information, transaction log files must be configured to only allow read access. Transaction log files are files that record and store the details or the history of the transactions or the activities that occur within a system or a database, such as the date, the time, the user, the action, or the outcome. Transaction log files are important for auditing purposes, as they can provide the evidence or the proof of the transactions or the activities that occur within a system or a database, and they can also support the recovery or the restoration of the system or the database in case of a failure or a corruption. To protect auditable information, transaction log files must be configured to only allow read access, which means that only authorized users or devices can view or access the transaction log files, but they cannot modify, delete, or overwrite the transaction log files. This can prevent or reduce the risk of tampering, alteration, or destruction of the auditable information, and it can also ensure the integrity, the accuracy, or the reliability of the auditable information.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 197; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, page 354
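On a POSIX system, the read-only requirement can be sketched with file permissions; the path and log entry below are illustrative.

```python
import os
import stat
import tempfile

# Sketch of enforcing read-only access on a transaction log file using
# POSIX permissions: everyone may read (mode 0o444), nobody may write,
# so recorded audit evidence cannot be silently altered.

log_path = os.path.join(tempfile.mkdtemp(), "transactions.log")
with open(log_path, "w") as f:
    f.write("2024-01-01T00:00:00 user=alice action=login result=ok\n")

os.chmod(log_path, 0o444)  # r--r--r--: read access only

mode = stat.S_IMODE(os.stat(log_path).st_mode)
print(oct(mode))                  # 0o444 (on POSIX systems)
print(bool(mode & stat.S_IWUSR))  # False: even the owner's write bit is cleared
```

In practice a privileged user can still change the mode back, so read-only permissions are typically combined with centralized log forwarding or write-once storage for stronger tamper resistance.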
After acquiring the latest security updates, what must be done before deploying to production systems?
Use tools to detect missing system patches
Install the patches on a test system
Subscribe to notifications for vulnerabilities
Assess the severity of the situation
After acquiring the latest security updates, the best practice is to install the patches on a test system before deploying them to the production systems. This is to ensure that the patches are compatible with the system configuration and do not cause any adverse effects or conflicts with the existing applications or services. The test system should be isolated from the production environment and should have the same or similar specifications and settings as the production system.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 336; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, page 297
A security architect plans to reference a Mandatory Access Control (MAC) model for implementation. This indicates that which of the following properties are being prioritized?
Confidentiality
Integrity
Availability
Accessibility
According to the CISSP Official (ISC)2 Practice Tests, the property that is prioritized by a Mandatory Access Control (MAC) model for implementation is confidentiality. Confidentiality is the property that ensures that the data or information is only accessible or disclosed to the authorized parties, and is protected from unauthorized or unintended access or disclosure. A MAC model is a type of access control model that grants or denies access to an object based on the security labels of the subject and the object, and the security policy enforced by the system. A security label is a tag or a marker that indicates the classification, sensitivity, or clearance of the subject or the object, such as top secret, secret, or confidential. A security policy is a set of rules or criteria that defines how the access decisions are made based on the security labels, such as the Bell-LaPadula model or the Biba model. A MAC model prioritizes confidentiality, as it ensures that the data or information is only accessible or disclosed to the subjects that have the appropriate security labels and clearance, and that the data or information is not leaked or compromised by the subjects that have lower security labels or clearance. Integrity is not the property that is prioritized by a MAC model for implementation, although it may be a property that is supported or enhanced by a MAC model. Integrity is the property that ensures that the data or information is accurate, complete, and consistent, and is protected from unauthorized or unintended modification or corruption. A MAC model may support or enhance integrity, as it ensures that the data or information is only modified or corrupted by the subjects that have the appropriate security labels and clearance, and that the data or information is not altered or damaged by the subjects that have lower security labels or clearance. 
However, a MAC model does not prioritize integrity, as it does not prevent or detect modification or corruption of the data by subjects with the same or higher security labels and clearance, or by external factors such as errors, failures, or accidents. Availability is not the property that a MAC model prioritizes either, although a MAC model may support it by ensuring that data is accessible and usable by subjects with the appropriate security labels and clearance. A MAC model does not prevent denial or disruption of access by subjects with the same or higher labels, or by external events such as attacks, failures, or disasters. Accessibility is not the prioritized property because it is a usability property, not a security property: it ensures that data is accessible and usable by users with different abilities, needs, or preferences, such as users with disabilities or impairments, and it enhances the user experience rather than protecting the data from unauthorized access, disclosure, modification, or disruption.
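The confidentiality-first behavior of a MAC model can be illustrated with a minimal sketch of the Bell-LaPadula rules mentioned above; the label names and level ordering are illustrative.

```python
# Sketch of MAC confidentiality enforcement under the Bell-LaPadula rules:
# "no read up" (simple security property) and "no write down" (*-property).
# Labels and clearances are illustrative.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_clearance, object_label):
    # Simple security property: a subject may read only objects labeled
    # at or below its clearance (no read up).
    return LEVELS[subject_clearance] >= LEVELS[object_label]

def can_write(subject_clearance, object_label):
    # *-property: a subject may write only objects labeled at or above
    # its clearance, so classified data cannot leak into lower labels.
    return LEVELS[subject_clearance] <= LEVELS[object_label]

print(can_read("secret", "confidential"))   # True: reading down is allowed
print(can_read("confidential", "secret"))   # False: no read up
print(can_write("secret", "confidential"))  # False: no write down
```

Both rules exist to stop information flowing from higher to lower labels, which is exactly the confidentiality emphasis the question asks about; neither rule stops a cleared subject from corrupting data, which is why integrity is not the prioritized property.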
A company was ranked as high in the following National Institute of Standards and Technology (NIST) functions: Protect, Detect, Respond and Recover. However, a low maturity grade was attributed to the Identify function. In which of the following the controls categories does this company need to improve when analyzing its processes individually?
Asset Management, Business Environment, Governance and Risk Assessment
Access Control, Awareness and Training, Data Security and Maintenance
Anomalies and Events, Security Continuous Monitoring and Detection Processes
Recovery Planning, Improvements and Communications
According to the NIST Cybersecurity Framework, the control categories that the company needs to improve when analyzing its processes individually are Asset Management, Business Environment, Governance and Risk Assessment. These control categories are part of the Identify function, one of the five core functions of the NIST Cybersecurity Framework. The Identify function provides the foundational understanding and awareness of the organization's systems, assets, data, capabilities, and risks, as well as the organization's role and contribution to the critical infrastructure and society. It helps the organization prioritize and align its cybersecurity activities and resources with its business objectives and requirements, and establish and maintain its cybersecurity policies and standards. The Identify function consists of six control categories, the specific outcomes the organization should achieve for the function: Asset Management; Business Environment; Governance; Risk Assessment; Risk Management Strategy; and Supply Chain Risk Management.
The company was ranked as high in the following NIST functions: Protect, Detect, Respond and Recover. However, a low maturity grade was attributed to the Identify function. This means the company has a good level of capability and performance in the cybersecurity activities and controls related to the other four functions, but a low level in those related to the Identify function. It therefore needs to improve the processes and controls in the Identify categories: Asset Management, Business Environment, Governance, Risk Assessment, Risk Management Strategy, and Supply Chain Risk Management. Improving these categories enhances the company's foundational understanding of its systems, assets, data, capabilities, and risks, as well as its role in the critical infrastructure and society, and lets it better align its cybersecurity activities and resources with its business objectives and maintain its cybersecurity policies and standards. Access Control, Awareness and Training, Data Security and Maintenance are not the categories the company needs to improve, as they belong to the Protect function, not the Identify function. The Protect function provides the appropriate safeguards and countermeasures to ensure the delivery of critical services and to limit or contain the impact of potential cybersecurity incidents. It consists of six control categories: Access Control; Awareness and Training; Data Security; Information Protection Processes and Procedures; Maintenance; and Protective Technology.
The company was ranked as high in the Protect function, so it does not need to improve the processes and controls in the Access Control, Awareness and Training, Data Security, Information Protection Processes and Procedures, Maintenance, or Protective Technology categories. Anomalies and Events, Security Continuous Monitoring and Detection Processes are not the categories to improve either, as they belong to the Detect function, not the Identify function. The Detect function provides the appropriate activities and capabilities to identify the occurrence of a cybersecurity incident in a timely manner. It consists of three control categories: Anomalies and Events; Security Continuous Monitoring; and Detection Processes.
The company was ranked as high in the Detect function, so it does not need to improve the processes and controls in the Anomalies and Events, Security Continuous Monitoring, or Detection Processes categories. Recovery Planning, Improvements and Communications are not the categories to improve either, as they belong to the Recover function, not the Identify function. The Recover function provides the appropriate activities and capabilities to restore the normal operations and functions of the organization as quickly as possible after a cybersecurity incident, and to prevent or reduce the recurrence or impact of future incidents. It consists of three control categories: Recovery Planning; Improvements; and Communications.
The company was ranked as high in the Recover function, which means that it has a good level of capability and performance in implementing and executing the cybersecurity activities and controls that are related to the Recover function. Therefore, the company does not need to improve its processes and controls that are related to the Recover function, which are the Recovery Planning, Improvements, and Communications control categories.
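The function-to-category mapping above can be summarized as a small lookup table (category names as used in this passage, per NIST CSF v1.1), which turns the scenario's low-maturity grade into a concrete improvement list.

```python
# NIST CSF function -> control category lookup, used to find where a
# low-maturity function points the improvement effort. Category names
# follow CSF v1.1 as described in the passage; maturity grades mirror
# the question's scenario.

NIST_CSF = {
    "Identify": ["Asset Management", "Business Environment", "Governance",
                 "Risk Assessment", "Risk Management Strategy",
                 "Supply Chain Risk Management"],
    "Protect": ["Access Control", "Awareness and Training", "Data Security",
                "Information Protection Processes and Procedures",
                "Maintenance", "Protective Technology"],
    "Detect": ["Anomalies and Events", "Security Continuous Monitoring",
               "Detection Processes"],
    "Recover": ["Recovery Planning", "Improvements", "Communications"],
}

maturity = {"Identify": "low", "Protect": "high", "Detect": "high",
            "Respond": "high", "Recover": "high"}

to_improve = [cat for fn, grade in maturity.items()
              if grade == "low" for cat in NIST_CSF.get(fn, [])]
print(to_improve[:4])
# ['Asset Management', 'Business Environment', 'Governance', 'Risk Assessment']
```

The first four entries match answer option A; the Respond function is omitted from the table only because the passage does not enumerate its categories.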
What is the GREATEST challenge to identifying data leaks?
Available technical tools that enable user activity monitoring.
Documented asset classification policy and clear labeling of assets.
Senior management cooperation in investigating suspicious behavior.
Law enforcement participation to apprehend and interrogate suspects.
The greatest challenge to identifying data leaks is maintaining a documented asset classification policy and clear labeling of assets. Data leaks are the unauthorized or accidental disclosure or exposure of sensitive or confidential data, such as personal information, trade secrets, or intellectual property, and they can cause serious harm to the data owner, such as reputation loss, legal liability, or competitive disadvantage. Identifying a leak depends on knowing which data is sensitive and where it resides: without a documented classification policy and clearly labeled assets, an organization cannot reliably detect, track, or report unauthorized data movement, access, or usage, or alert the data owner, custodian, or user to abnormal activity. Establishing and consistently applying such a policy across all assets is difficult, which makes it the greatest challenge. The other options are not challenges but benefits or enablers of identifying data leaks. Available technical tools that enable user activity monitoring are a benefit, as they provide the means for collecting, analyzing, and auditing the data actions and behaviors of users and devices. Senior management cooperation in investigating suspicious behavior is an enabler, as it provides the support and authority for conducting a data leak investigation and taking the appropriate actions or measures.
Law enforcement participation to apprehend and interrogate suspects is likewise an enabler rather than a challenge, as it provides assistance and collaboration in pursuing and prosecuting data leak perpetrators. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, p. 29; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, p. 287.
Which of the following are Systems Engineering Life Cycle (SELC) Technical Processes?
Concept, Development, Production, Utilization, Support, Retirement
Stakeholder Requirements Definition, Architectural Design, Implementation, Verification, Operation
Acquisition, Measurement, Configuration Management, Production, Operation, Support
Concept, Requirements, Design, Implementation, Production, Maintenance, Support, Disposal
The Systems Engineering Life Cycle (SELC) Technical Processes are the activities that transform stakeholder needs into a system solution. They include the following five processes: Stakeholder Requirements Definition, Architectural Design, Implementation, Verification, and Operation.
References:
Which of the following is the MOST important goal of information asset valuation?
Developing a consistent and uniform method of controlling access on information assets
Developing appropriate access control policies and guidelines
Assigning a financial value to an organization’s information assets
Determining the appropriate level of protection
According to the CISSP All-in-One Exam Guide [2], the most important goal of information asset valuation is to assign a financial value to an organization's information assets. Information asset valuation is the process of estimating the worth or importance of the information assets that an organization owns, creates, uses, or maintains, such as data, documents, records, or intellectual property. Valuation helps the organization measure the impact and return of its information assets, determine the appropriate level of protection, investment, and management for them, and comply with legal, regulatory, and contractual obligations that may require disclosing or reporting their value. Developing a consistent and uniform method of controlling access to information assets is not the most important goal, although it may be a benefit or outcome of valuation. Controlling access to information assets is the process of granting or denying the rights and permissions to access, use, modify, or disclose the assets, based on the identity, role, or need of the users or processes; it protects the confidentiality, integrity, and availability of the assets and enforces the security policies and standards for them. Developing appropriate access control policies and guidelines is likewise an outcome rather than the goal. These documents define the rules, principles, and procedures for controlling access to information assets, as well as the roles and responsibilities of the stakeholders involved.
Access control policies and guidelines help the organization to establish and communicate the expectations and requirements for controlling access on information assets, as well as to monitor and audit the compliance and effectiveness of the access control mechanisms. Determining the appropriate level of protection is not the most important goal of information asset valuation, although it may be a benefit or outcome of it. The level of protection is the degree or extent of the security measures and controls that are applied to the information assets, to prevent or mitigate the potential threats and risks that may affect them. The level of protection should be proportional to the value and sensitivity of the information assets, as well as the impact and likelihood of the threats and risks. References: 2
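One standard way to turn an asset's assigned financial value into a protection decision, not spelled out in the passage but common in CISSP quantitative risk analysis, is the SLE/ALE calculation; the figures below are illustrative.

```python
# Standard quantitative risk formulas often paired with asset valuation:
#   SLE (single loss expectancy) = asset value x exposure factor
#   ALE (annualized loss expectancy) = SLE x annualized rate of occurrence

def single_loss_expectancy(asset_value, exposure_factor):
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    return sle * aro

asset_value = 200_000   # financial value assigned to the information asset
exposure_factor = 0.25  # fraction of the asset's value lost in one incident
aro = 0.5               # expected incidents per year

sle = single_loss_expectancy(asset_value, exposure_factor)
ale = annualized_loss_expectancy(sle, aro)
print(sle)  # 50000.0
print(ale)  # 25000.0
```

The ALE gives an upper bound on what is worth spending annually on countermeasures for this asset, which is how the financial value drives the "appropriate level of protection" mentioned above.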
Are companies legally required to report all data breaches?
No, different jurisdictions have different rules.
No, not if the data is encrypted.
No, companies' codes of ethics don't require it.
No, only if the breach had a material impact.
Companies are not legally required to report all data breaches, as different jurisdictions have different rules and regulations regarding data breach notification. For example, in the European Union, the General Data Protection Regulation (GDPR) requires companies to report data breaches that pose a risk to the rights and freedoms of individuals within 72 hours of becoming aware of the breach. In the United States, there is no federal law that mandates data breach notification, but most states have their own laws that vary in terms of the definition, scope, and timing of data breach notification.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 36; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 32
Secure Sockets Layer (SSL) encryption protects
data at rest.
the source IP address.
data transmitted.
data availability.
SSL encryption is used to secure communications over computer networks by encrypting the data transmitted between two systems, typically a client and a server.
References:
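A short sketch with Python's standard `ssl` module shows this protection of transmitted data in practice: the default client context enforces certificate validation and hostname checking before any application data flows. No network connection is made here.

```python
import ssl

# Sketch showing that TLS/SSL protects data in transit: the client wraps
# its TCP socket in an encrypted channel and verifies the server before
# any application data is transmitted.

context = ssl.create_default_context()

# Defaults enforce server certificate validation and hostname checking,
# protecting the transmitted data against eavesdropping and tampering.
print(context.check_hostname)                    # True
print(context.verify_mode == ssl.CERT_REQUIRED)  # True

# Typical use (commented out to avoid real network traffic):
# import socket
# with socket.create_connection(("example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="example.com") as tls:
#         tls.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
```

Note that TLS protects only data in transit; data at rest on either endpoint requires separate encryption.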
When planning a penetration test, the tester will be MOST interested in which information?
Places to install back doors
The main network access points
Job application handouts and tours
Exploits that can attack weaknesses
When planning a penetration test, the tester will be most interested in the exploits that can attack the weaknesses of the target system or network. Exploits are the techniques or tools that take advantage of the vulnerabilities to compromise the security or functionality of the system or network. The tester will use the exploits to simulate a real attack and test the effectiveness of the security controls and defenses.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 424; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, page 378
Which one of the following activities would present a significant security risk to organizations when employing a Virtual Private Network (VPN) solution?
VPN bandwidth
Simultaneous connection to other networks
Users with Internet Protocol (IP) addressing conflicts
Remote users with administrative rights
According to CISSP For Dummies [4], the activity that would present a significant security risk to organizations when employing a VPN solution is simultaneous connection to other networks. A VPN creates a secure, encrypted tunnel over a public or untrusted network, such as the internet, to connect remote users or sites to the organization's private network, such as the intranet. It provides security and privacy for the data and communication transmitted over the tunnel, as well as access to the resources and services on the private network. However, a VPN also introduces security risks and challenges, such as configuration errors, authentication issues, malware infections, or data leakage. One such risk is simultaneous connection to other networks, which occurs when a VPN user connects to the organization's private network and another network at the same time, such as a home network, a public Wi-Fi network, or a malicious network. This creates a potential vulnerability or backdoor that attackers can exploit to access or compromise the organization's private network through the weaker security or lower trust of the other network. Therefore, the organization should implement and enforce policies and controls that prevent or restrict simultaneous connections to other networks when using a VPN solution. VPN bandwidth is not an activity that would present a significant security risk, although it may affect the performance and availability of the VPN solution. VPN bandwidth is the amount of data that can be transmitted or received over the VPN tunnel per unit of time, which depends on the speed and capacity of the network connection, the encryption and compression methods, the traffic load, and network congestion.
VPN bandwidth may limit the quality and efficiency of the data and communication that are transmitted over the VPN tunnel, but it does not directly pose a significant security risk to the organization’s private network. Users with IP addressing conflicts is not an activity that would present a significant security risk to organizations when employing a VPN solution, although it may be a factor that causes errors and disruptions in the VPN solution. IP addressing conflicts occur when two or more devices or hosts on the same network have the same IP address, which is a unique identifier that is assigned to each device or host to communicate over the network.
Which of the following could elicit a Denial of Service (DoS) attack against a credential management system?
Delayed revocation or destruction of credentials
Modification of Certificate Revocation List
Unauthorized renewal or re-issuance
Token use after decommissioning
The modification of Certificate Revocation List (CRL) could elicit a Denial of Service (DoS) attack against a credential management system by altering the list of revoked certificates and preventing valid users from accessing the system or allowing invalid users to access the system. A CRL is a list of digital certificates that have been revoked by the issuing Certificate Authority (CA) before their expiration date and should not be trusted.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 216; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 183
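The revocation check above can be sketched in a few lines. This is a conceptual illustration only: a real CRL is a signed X.509 structure distributed by the issuing CA, and the serial numbers and function names here are hypothetical.

```python
# Conceptual sketch of a CRL lookup (hypothetical data; a real CRL is a
# signed X.509 structure published by the issuing CA).

# Serial numbers the CA has revoked before their expiration date.
REVOKED_SERIALS = {0x1A2B, 0x3C4D}

def is_certificate_trusted(serial: int, revoked: set) -> bool:
    """A certificate whose serial appears on the CRL must not be trusted."""
    return serial not in revoked

assert is_certificate_trusted(0x5E6F, REVOKED_SERIALS)       # valid certificate
assert not is_certificate_trusted(0x1A2B, REVOKED_SERIALS)   # revoked certificate

# An attacker who can modify the list inverts both outcomes: removing an
# entry re-enables a revoked certificate, while adding a valid serial
# denies service to a legitimate user (the DoS in the question).
tampered = REVOKED_SERIALS | {0x5E6F}
assert not is_certificate_trusted(0x5E6F, tampered)
```

The sketch shows why CRL integrity matters as much as CRL availability: either adding or deleting entries subverts the trust decision.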
A network scan found 50% of the systems with one or more critical vulnerabilities. Which of the following represents the BEST action?
Assess vulnerability risk and program effectiveness.
Assess vulnerability risk and business impact.
Disconnect all systems with critical vulnerabilities.
Disconnect systems with the most number of vulnerabilities.
The best action after finding 50% of the systems with one or more critical vulnerabilities is to assess the vulnerability risk and business impact. This means to evaluate the likelihood and severity of the vulnerabilities being exploited, as well as the potential consequences and costs for the business operations and objectives. This assessment can help prioritize the remediation efforts, allocate the resources, and justify the investments.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 343; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, page 304
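The prioritization described above can be sketched as a simple likelihood-times-impact scoring pass. The system names and scores below are hypothetical; real programs would use CVSS scores and documented business-impact ratings.

```python
# Minimal sketch of risk-based remediation ordering (hypothetical scores):
# risk = likelihood of exploitation x business impact, sorted descending.

systems = [
    {"name": "web01", "likelihood": 0.9, "impact": 8},   # internet-facing
    {"name": "db01",  "likelihood": 0.4, "impact": 10},  # critical data
    {"name": "hr02",  "likelihood": 0.7, "impact": 3},   # low-value host
]

for s in systems:
    s["risk"] = s["likelihood"] * s["impact"]

remediation_order = sorted(systems, key=lambda s: s["risk"], reverse=True)
print([s["name"] for s in remediation_order])  # highest risk remediated first
```

The point of the exercise is that raw vulnerability counts do not drive the ordering; the business-impact term can rank a host with one severe exposure above a host with many minor ones.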
A database administrator is asked by a high-ranking member of management to perform specific changes to the accounting system database. The administrator is specifically instructed to not track or evidence the change in a ticket. Which of the following is the BEST course of action?
Ignore the request and do not perform the change.
Perform the change as requested, and rely on the next audit to detect and report the situation.
Perform the change, but create a change ticket regardless to ensure there is complete traceability.
Inform the audit committee or internal audit directly using the corporate whistleblower process.
According to the CISSP CBK Official Study Guide, the best course of action for the database administrator in this scenario is to inform the audit committee or internal audit directly using the corporate whistleblower process. A whistleblower is a person who reports wrongdoing, fraud, corruption, or illegal activity within an organization to the appropriate authorities or parties; a whistleblower process enables such reports, protects the reporter from retaliation or discrimination, and ensures that the report is handled properly and confidentially. Using it here demonstrates professional ethics and compliance with organizational policies and standards. Simply ignoring the request and not performing the change is irresponsible and leaves the administrator exposed to continued pressure or threats from the high-ranking manager. Performing the change and relying on the next audit to detect and report it is unethical, potentially illegal, and compromises the integrity and reliability of the accounting system database. Performing the change but creating a ticket anyway preserves traceability, yet still executes an unauthorized change and creates a direct conflict with the requester.
Which of the following secures web transactions at the Transport Layer?
Secure HyperText Transfer Protocol (S-HTTP)
Secure Sockets Layer (SSL)
Socket Security (SOCKS)
Secure Shell (SSH)
Secure Sockets Layer (SSL) is the only option that secures web transactions at the transport layer of the OSI model. SSL is a protocol or a standard that provides security and privacy for the data or the messages exchanged between a web browser and a web server, or between any two applications that use the TCP/IP protocol. SSL uses cryptographic techniques, such as encryption, decryption, hashing, and digital signatures, to protect the confidentiality, integrity, and authenticity of the data or the messages. SSL also uses certificates and public key infrastructure (PKI) to establish the identity and the trustworthiness of the parties involved in the web transactions.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 215; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 182
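In practice, SSL has been superseded by its successor TLS, and Python's standard `ssl` module illustrates the transport-layer handshake setup described above. The sketch below only configures a client context; the commented-out socket wrapping shows where the context would secure an actual web transaction.

```python
import ssl

# Sketch: a client-side TLS context with secure defaults.
# create_default_context() enables certificate validation and hostname
# checking, which is how SSL/TLS uses PKI to authenticate the server.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # SSL itself is deprecated

assert ctx.check_hostname                     # server identity is verified
assert ctx.verify_mode == ssl.CERT_REQUIRED   # untrusted certs are rejected

# Usage (not executed here): wrap a TCP socket before sending HTTP.
# with socket.create_connection(("example.org", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
#         tls.sendall(b"GET / HTTP/1.1\r\nHost: example.org\r\n\r\n")
```

Note that the encryption happens below the application protocol (HTTP) and above TCP, which is why SSL/TLS is described as operating at the transport layer.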
Which methodology is recommended for penetration testing to be effective in the development phase of the life-cycle process?
White-box testing
Software fuzz testing
Black-box testing
Visual testing
White-box testing is recommended during the development phase as it involves the examination of the application’s source code and design documents to identify vulnerabilities, ensuring that security is integrated into the development lifecycle. References: CISSP Official (ISC)2 Practice Tests, Chapter 8, page 219
Which of the following explains why record destruction requirements are included in a data retention policy?
To comply with legal and business requirements
To save cost for storage and backup
To meet destruction guidelines
To validate data ownership
Record destruction requirements are included in a data retention policy to ensure that organizations comply with legal and business requirements. Proper disposal of records helps in protecting sensitive information from unauthorized access and also ensures compliance with laws regulating the storage and disposal of data. References: CISSP Official (ISC)2 Practice Tests, Chapter 1, page 32; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 40
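A retention policy's destruction requirement is ultimately a date comparison. The sketch below assumes a hypothetical seven-year retention period; actual periods vary by jurisdiction and record type.

```python
from datetime import date, timedelta

# Sketch of a retention check (hypothetical 7-year retention period):
# records older than the retention period are flagged for destruction.
RETENTION = timedelta(days=7 * 365)

def due_for_destruction(created: date, today: date) -> bool:
    return today - created > RETENTION

today = date(2024, 1, 1)
assert due_for_destruction(date(2015, 1, 1), today)      # past retention
assert not due_for_destruction(date(2020, 1, 1), today)  # still retained
```

Flagging records this way supports both goals named in the answer: legal compliance (records are not destroyed early) and risk reduction (sensitive records are not kept longer than required).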
In the Software Development Life Cycle (SDLC), maintaining accurate hardware and software inventories is a critical part of
systems integration.
risk management.
quality assurance.
change management.
According to the CISSP CBK Official Study Guide, maintaining accurate hardware and software inventories in the Software Development Life Cycle (SDLC) is a critical part of change management. The SDLC is a structured process for designing, developing, and testing quality software, covering the entire life of the software from initial concept to deployment and maintenance. Change management is the process of controlling the changes made to the software or system during the SDLC through defined policies, procedures, and tools, so that errors, defects, and vulnerabilities introduced by uncontrolled change are prevented or minimized. Accurate inventories are essential to this process: without a reliable record of the hardware and software components involved (their names, descriptions, versions, and status), changes cannot be identified, tracked, audited, or rolled back, and the integrity and performance of the system cannot be assured.
The other options describe processes that may benefit from change management but for which inventory maintenance is not the defining activity. Systems integration combines hardware and software components through interfaces, protocols, and standards so that they operate together, and with other systems, as intended. Risk management identifies, analyzes, evaluates, and treats the risks and uncertainties, such as threats, attacks, or incidents, that may affect the software or system. Quality assurance verifies the quality and performance of the software against defined standards, criteria, and metrics through testing, validation, and verification. Each of these draws on the inventories that change management maintains, but maintaining those inventories is itself a change management activity.
Which of the following would BEST describe the role directly responsible for data within an organization?
Data custodian
Information owner
Database administrator
Quality control
According to CISSP For Dummies, the role directly responsible for data within an organization is the information owner. The information owner has the authority and accountability for the data or information the organization owns, creates, uses, or maintains, such as documents, records, or intellectual property. The owner defines the classification, value, and sensitivity of the data; sets the security requirements, policies, and standards that apply to it; grants or revokes access rights and permissions; and monitors and audits the compliance and effectiveness of the controls protecting it. The data custodian is not directly responsible for the data, but supports the information owner by implementing and maintaining those security controls, performing technical and operational tasks such as backup, recovery, encryption, and disposal. The database administrator likewise supports the owner and custodian by managing and administering the database system that stores and processes the data, handling installation, configuration, optimization, and troubleshooting.
From a cryptographic perspective, the service of non-repudiation includes which of the following features?
Validity of digital certificates
Validity of the authorization rules
Proof of authenticity of the message
Proof of integrity of the message
From a cryptographic perspective, the service of non-repudiation includes proof of integrity of the message. Non-repudiation ensures that the sender of a message cannot deny sending it and the receiver cannot deny receiving it, by providing evidence that the message has not been altered or tampered with during transmission. It is achieved with digital signatures and certificates, cryptographic techniques that bind the sender’s identity to the content of the message and allow any modification to be detected. Non-repudiation does not include the validity of digital certificates, which is the service of ensuring that certificates are authentic, current, and trustworthy by checking their expiration dates, revocation status, and issuing authorities. It does not include the validity of authorization rules, which govern whether access to a resource is granted or denied under the policies and permissions defined by the owner or administrator. And it is distinct from proof of authenticity of the message, the service of verifying that the message comes from the claimed sender by checking identity and credentials.
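The distinction between integrity, authentication, and non-repudiation can be sketched with the standard library. The message and key below are illustrative; note that a real non-repudiation service requires an asymmetric digital signature (e.g. via a library such as `cryptography`), which the standard library does not provide.

```python
import hashlib
import hmac

# Sketch: integrity vs. non-repudiation. A hash detects tampering; an HMAC
# adds authentication with a shared key; only an asymmetric digital
# signature (not shown here) provides non-repudiation, because only the
# sender holds the private key.

message = b"transfer $100 to account 42"
digest = hashlib.sha256(message).hexdigest()

# Any tampering changes the digest, so modification is detectable.
assert hashlib.sha256(b"transfer $900 to account 42").hexdigest() != digest

shared_key = b"s3cret"  # illustrative key shared by sender and receiver
tag = hmac.new(shared_key, message, hashlib.sha256).digest()

# The HMAC proves the message came from someone holding the key -- but the
# receiver also holds it, so the sender can still repudiate the message.
assert hmac.compare_digest(tag, hmac.new(shared_key, message, hashlib.sha256).digest())
```

This is why the question frames non-repudiation as including proof of integrity: a signature covers a hash of the message, so any alteration invalidates the signature, while the private-key binding supplies the "cannot deny" property.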
Which of the following standards/guidelines requires an Information Security Management System (ISMS) to be defined?
International Organization for Standardization (ISO) 27000 family
Information Technology Infrastructure Library (ITIL)
Payment Card Industry Data Security Standard (PCIDSS)
ISO/IEC 20000
The International Organization for Standardization (ISO) 27000 family of standards/guidelines requires an Information Security Management System (ISMS) to be defined. An ISMS is a systematic approach to managing the security of information assets, such as data, systems, processes, and people. An ISMS includes policies, procedures, controls, and activities that aim to protect the confidentiality, integrity, and availability of information, as well as to comply with the legal and regulatory requirements. The ISO 27000 family provides best practices and guidance for establishing, implementing, maintaining, and improving an ISMS. The ISO 27001 standard specifies the requirements for an ISMS, while the other standards in the family provide more detailed or specific guidance on different aspects of information security. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, p. 23; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 1: Security and Risk Management, p. 25.
Which of the following BEST describes a chosen plaintext attack?
The cryptanalyst can generate ciphertext from arbitrary text.
The cryptanalyst examines the communication being sent back and forth.
The cryptanalyst can choose the key and algorithm to mount the attack.
The cryptanalyst is presented with the ciphertext from which the original message is determined.
According to the CISSP CBK Official Study Guide, a chosen plaintext attack is a form of cryptanalysis in which the cryptanalyst can generate ciphertext from arbitrary text. Cryptanalysis is the process of breaking or analyzing a cryptographic system, by recovering the plaintext, the key, or the algorithm from the ciphertext, or by exploiting weaknesses in the system. In a chosen plaintext attack, the cryptanalyst has access to the encryption function or device, can choose any plaintext, and obtains the corresponding ciphertext; this can help deduce the key or the algorithm, or build a codebook mapping plaintexts to ciphertexts. Merely examining the communication being sent back and forth describes a ciphertext-only attack, where the cryptanalyst sees only ciphertext and tries to infer the plaintext or key through statistical or linguistic analysis. Choosing the key and the algorithm does not describe a cryptanalytic attack model at all: if the key were already known, no cryptanalysis would be necessary. (In a known plaintext attack, by contrast, the cryptanalyst possesses some plaintext/ciphertext pairs encrypted under the same key and looks for correlations or patterns between them.) Being presented with the ciphertext from which the original message is determined describes the general decryption problem rather than the chosen plaintext scenario.
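A chosen plaintext attack can be demonstrated against a deliberately weak cipher. The repeating-key XOR below is a toy construction (real ciphers are designed to resist this), and the key length is assumed known for brevity.

```python
from itertools import cycle

# Toy demonstration: chosen plaintext attack on a repeating-key XOR cipher.
# This cipher is deliberately weak; the attack recovers the full key from
# a single chosen plaintext.

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

SECRET_KEY = b"key!"  # unknown to the attacker; length assumed known

# The attacker submits a chosen plaintext of zero bytes. Because
# 0 XOR k == k, the returned ciphertext is exactly the keystream.
chosen = bytes(16)
keystream = xor_encrypt(chosen, SECRET_KEY)
recovered_key = keystream[:4]
assert recovered_key == SECRET_KEY

# With the key recovered, any intercepted ciphertext can be decrypted
# (XOR with the same repeating key is its own inverse).
intercepted = xor_encrypt(b"attack at dawn", SECRET_KEY)
assert xor_encrypt(intercepted, recovered_key) == b"attack at dawn"
```

The example captures the defining feature of the attack model: the analyst never learns the key directly, but the ability to pick inputs to the encryption function is enough to extract it.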
Discretionary Access Control (DAC) restricts access according to
data classification labeling.
page views within an application.
authorizations granted to the user.
management accreditation.
Discretionary Access Control (DAC) restricts access according to authorizations granted to the user. DAC is a type of access control that allows the owner or creator of a resource to decide who can access it and what level of access they can have. DAC uses access control lists (ACLs) to assign permissions to resources, and users can pass or change their permissions to other users.
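The owner-granted ACL model described above can be sketched in a few lines. The resource, user names, and permission strings are illustrative only.

```python
# Sketch of DAC with an access control list: the resource owner grants
# permissions at their own discretion (names and permissions are
# illustrative, not from any real system).

acl = {
    "payroll.xlsx": {"owner": "alice", "grants": {"bob": {"read"}}},
}

def grant(resource: str, grantor: str, user: str, perm: str) -> None:
    entry = acl[resource]
    if grantor != entry["owner"]:
        raise PermissionError("only the owner may grant access under DAC")
    entry["grants"].setdefault(user, set()).add(perm)

def allowed(resource: str, user: str, perm: str) -> bool:
    entry = acl[resource]
    return user == entry["owner"] or perm in entry["grants"].get(user, set())

assert allowed("payroll.xlsx", "bob", "read")        # granted by owner
assert not allowed("payroll.xlsx", "bob", "write")   # never granted
grant("payroll.xlsx", "alice", "carol", "read")      # owner's discretion
assert allowed("payroll.xlsx", "carol", "read")
```

Contrast this with mandatory access control, where a central policy based on classification labels (the first answer option) decides access regardless of the owner's wishes.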
An organization has developed a major application that has undergone accreditation testing. After receiving the results of the evaluation, what is the final step before the application can be accredited?
Acceptance of risk by the authorizing official
Remediation of vulnerabilities
Adoption of standardized policies and procedures
Approval of the System Security Plan (SSP)
The final step before the application can be accredited is the acceptance of risk by the authorizing official, who is responsible for making the final decision on whether to authorize the operation of the system or not. The authorizing official must review the results of the evaluation, the System Security Plan (SSP), and the residual risks, and determine if the risks are acceptable or not. The other options are not the final step, but rather part of the accreditation process. Remediation of vulnerabilities is done before the evaluation, adoption of standardized policies and procedures is done during the development, and approval of the SSP is done by the system owner, not the authorizing official. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, p. 245; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 2, p. 63.
Which of the following roles has the obligation to ensure that a third party provider is capable of processing and handling data in a secure manner and meeting the standards set by the organization?
Data Custodian
Data Owner
Data Creator
Data User
The role obligated to ensure that a third party provider can process and handle data securely and meet the standards set by the organization is the data owner. A data owner is the person or entity with authority and responsibility for the organization’s data, who defines its classification, usage, protection, and retention. The data owner remains accountable and liable for the security and quality of the data regardless of who processes or handles it, and discharges this obligation toward third parties by conducting due diligence, establishing service level agreements, defining security requirements, monitoring performance, and auditing compliance. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2, page 61; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 2, page 67
If compromised, which of the following would lead to the exploitation of multiple virtual machines?
Virtual device drivers
Virtual machine monitor
Virtual machine instance
Virtual machine file system
 If compromised, the virtual machine monitor would lead to the exploitation of multiple virtual machines. The virtual machine monitor, also known as the hypervisor, is the software layer that creates and manages the virtual machines on a physical host. The virtual machine monitor controls the allocation and distribution of the hardware resources, such as CPU, memory, disk, and network, among the virtual machines. The virtual machine monitor also provides the isolation and separation of the virtual machines from each other and from the physical host. If the virtual machine monitor is compromised, the attacker can gain access to all the virtual machines and their data, as well as the physical host and its resources.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 269; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 234
Regarding asset security and appropriate retention, which of the following INITIAL top three areas are important to focus on?
Security control baselines, access controls, employee awareness and training
Human resources, asset management, production management
Supply chain lead-time, inventory control, and encryption
Polygraphs, crime statistics, forensics
Regarding asset security and appropriate retention, the initial top three areas to focus on are security control baselines, access controls, and employee awareness and training. Asset security and appropriate retention are the processes of identifying, classifying, protecting, and disposing of the organization’s assets, such as data, systems, devices, and facilities, in order to prevent or reduce loss, theft, damage, or misuse and to comply with legal and regulatory requirements. Security control baselines establish the minimum protections each class of asset must have; access controls restrict who can use or modify the assets; and employee awareness and training ensure that the people handling the assets understand and apply those protections.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2: Asset Security, pp. 61-62; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 2: Asset Security, pp. 163-164.
Which of the following BEST describes the purpose of performing security certification?
To identify system threats, vulnerabilities, and acceptable level of risk
To formalize the confirmation of compliance to security policies and standards
To formalize the confirmation of completed risk mitigation and risk analysis
To verify that system architecture and interconnections with other systems are effectively implemented
 The best description of the purpose of performing security certification is to formalize the confirmation of compliance to security policies and standards. Security certification is the process of evaluating and validating the security posture and compliance of a system or network against a set of predefined criteria, such as security policies, standards, regulations, or best practices. Security certification results in a formal statement or document that attests the level of security and compliance achieved by the system or network.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, page 147; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3, page 123
An organization regularly conducts its own penetration tests. Which of the following scenarios MUST be covered for the test to be effective?
Third-party vendor with access to the system
System administrator access compromised
Internal attacker with access to the system
Internal user accidentally accessing data
The scenario that must be covered for the penetration test to be effective is a third-party vendor with access to the system. A third-party vendor, such as a software developer, a cloud provider, or a payment processor, is an external entity whose access can introduce weaknesses in the system’s configuration, authentication, or encryption, and whose own compromise can become an attack vector into the organization, allowing data to be stolen, modified, or deleted. Covering this scenario lets the test identify and assess the security gaps that arise specifically from vendor access, and drive the appropriate safeguards and countermeasures. The other scenarios, a compromised system administrator account, an internal attacker with access to the system, and an internal user accidentally accessing data, would each make the test more comprehensive, and all are worth assessing. However, an organization that regularly conducts its own penetration tests already exercises internal roles and access paths; the external vendor relationship is the path most easily overlooked, and it must be explicitly included for the test to be effective.
Which of the following controls is the FIRST step in protecting privacy in an information system?
Data Redaction
Data Minimization
Data Encryption
Data Storage
The first step in protecting privacy in an information system is data minimization. Data minimization is the principle and practice of collecting and processing only the minimum amount and type of data that is necessary and relevant for the intended purpose, and retaining the data only for the required duration. Data minimization reduces the risk and impact of data breaches, as well as the cost and complexity of data protection.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2, page 83; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 2, page 79
Internet Protocol (IP) source address spoofing is used to defeat
address-based authentication.
Address Resolution Protocol (ARP).
Reverse Address Resolution Protocol (RARP).
Transmission Control Protocol (TCP) hijacking.
Internet Protocol (IP) source address spoofing is used to defeat address-based authentication, which is a method of verifying the identity of a user or a system based on their IP address. IP source address spoofing involves forging the IP header of a packet to make it appear as if it came from a trusted or authorized source, bypassing the authentication check. IP source address spoofing can be used for various malicious purposes, such as denial-of-service attacks, man-in-the-middle attacks, or session hijacking. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 527; CISSP For Dummies, 7th Edition, Chapter 5, page 153.
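The weakness can be illustrated with a minimal sketch: an authenticator that trusts the claimed source address cannot distinguish a genuine packet from a forged one, because that field is attacker-controlled. The addresses and function names below are hypothetical.

```python
# Sketch of why address-based authentication fails against spoofing:
# the check trusts whatever source address the packet claims to have.
# Addresses here are illustrative.

TRUSTED_SOURCES = {"10.0.0.5"}  # hosts allowed by policy

def address_based_auth(claimed_src: str) -> bool:
    """Accepts a request purely on its (forgeable) source address."""
    return claimed_src in TRUSTED_SOURCES

# A legitimate request and a spoofed one are indistinguishable here,
# because the source address field is attacker-controlled.
legit = address_based_auth("10.0.0.5")    # genuine trusted host -> True
spoofed = address_based_auth("10.0.0.5")  # attacker forging 10.0.0.5 -> also True
print(legit, spoofed)
```

This is why address-based checks should be combined with cryptographic authentication, which an attacker cannot satisfy merely by forging a header field.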
Which of the following would be the FIRST step to take when implementing a patch management program?
Perform automatic deployment of patches.
Monitor for vulnerabilities and threats.
Prioritize vulnerability remediation.
Create a system inventory.
The first step to take when implementing a patch management program is to create a system inventory. A system inventory is a comprehensive list of all the hardware and software assets in the organization, such as servers, workstations, laptops, mobile devices, routers, switches, firewalls, operating systems, applications, firmware, etc. A system inventory helps to identify the scope and complexity of the patch management program, as well as the current patch status and vulnerabilities of each asset. A system inventory also helps to prioritize and schedule patch deployment, monitor patch compliance, and report patch performance. References: Patch Management Best Practices; Patch Management Process
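The dependency of every later step on the inventory can be sketched briefly: once assets and their patch levels are recorded, the remediation backlog becomes computable. The asset names, fields, and baseline value below are made up for illustration.

```python
# Minimal patch-management starting point: build a system inventory
# first, then derive the unpatched-asset list from it.
# All data here is illustrative.

inventory = [
    {"asset": "web-01", "os": "Ubuntu 22.04",  "patch_level": "2024-05"},
    {"asset": "db-01",  "os": "Ubuntu 22.04",  "patch_level": "2024-01"},
    {"asset": "fw-01",  "os": "RouterOS 7.14", "patch_level": "2024-05"},
]

CURRENT_BASELINE = "2024-05"  # hypothetical current patch baseline

def needs_patching(inv):
    """The inventory makes the remediation backlog computable."""
    return [a["asset"] for a in inv if a["patch_level"] < CURRENT_BASELINE]

print(needs_patching(inventory))  # only db-01 is behind the baseline
```

Without the inventory there is nothing to compare against the baseline, which is why the inventory must come before monitoring, prioritization, or deployment.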
Copyright provides protection for which of the following?
Ideas expressed in literary works
A particular expression of an idea
New and non-obvious inventions
Discoveries of natural phenomena
Copyright is a form of intellectual property that grants the author or creator of an original work the exclusive right to reproduce, distribute, perform, display, or license the work. Copyright does not protect ideas, concepts, facts, discoveries, or methods, but only the particular expression of an idea in a tangible medium, such as a book, a song, a painting, or a software program. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, page 287; CISSP For Dummies, 7th Edition, Chapter 3, page 87.
An auditor carrying out a compliance audit requests passwords that are encrypted in the system to verify that the passwords are compliant with policy. Which of the following is the BEST response to the auditor?
Provide the encrypted passwords and analysis tools to the auditor for analysis.
Analyze the encrypted passwords for the auditor and show them the results.
Demonstrate that non-compliant passwords cannot be created in the system.
Demonstrate that non-compliant passwords cannot be encrypted in the system.
 The best response to the auditor is to demonstrate that the system enforces the password policy and does not allow non-compliant passwords to be created. This way, the auditor can verify the compliance without compromising the confidentiality or integrity of the encrypted passwords. Providing the encrypted passwords and analysis tools to the auditor (A) may expose the passwords to unauthorized access or modification. Analyzing the encrypted passwords for the auditor and showing them the results (B) may not be sufficient to convince the auditor of the compliance, as the results could be manipulated or falsified. Demonstrating that non-compliant passwords cannot be encrypted in the system (D) is not a valid response, as encryption does not depend on the compliance of the passwords. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 241; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, page 303.
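A demonstration of that kind can be sketched as a creation-time policy gate: if non-compliant passwords are rejected before they are ever stored, the auditor can verify compliance without seeing any password material. The regular expression and its thresholds below are a hypothetical policy, not one from the question.

```python
import re

# Sketch of enforcing a password policy at creation time, so an auditor
# can verify compliance without access to stored passwords.
# The policy (12+ chars, upper, lower, digit, symbol) is hypothetical.

POLICY = re.compile(
    r"^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^A-Za-z\d]).{12,}$"
)

def create_password(candidate: str) -> bool:
    """Reject any candidate that violates policy before it is stored."""
    return bool(POLICY.match(candidate))

print(create_password("Tr0ub4dor&horse"))  # True: meets every rule
print(create_password("password"))         # False: rejected at creation
```

Because the gate sits in front of storage, showing the auditor that this check cannot be bypassed demonstrates compliance for every password in the system.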
Which one of the following describes granularity?
Maximum number of entries available in an Access Control List (ACL)
Fineness to which a trusted system can authenticate users
Number of violations divided by the number of total accesses
Fineness to which an access control system can be adjusted
Granularity is the degree of detail or precision that an access control system can provide. A granular access control system can specify different levels of access for different users, groups, resources, or conditions. For example, a granular firewall can allow or deny traffic based on the source, destination, port, protocol, time, or other criteria.
Which of the following Disaster Recovery (DR) sites is the MOST difficult to test?
Hot site
Cold site
Warm site
Mobile site
A cold site is a backup facility with little or no hardware equipment installed. It is the most cost-effective disaster recovery site option, but it takes a long time to set up and resume business operations, because most of the environment must be built out before anything can be exercised. Therefore, testing a cold site is the most difficult and time-consuming task.
Which one of the following security mechanisms provides the BEST way to restrict the execution of privileged procedures?
Role Based Access Control (RBAC)
Biometric access control
Federated Identity Management (IdM)
Application hardening
Role Based Access Control (RBAC) is the security mechanism that provides the best way to restrict the execution of privileged procedures. Privileged procedures are the actions or commands that require higher or special permissions or privileges to perform, such as changing system settings, installing software, or accessing sensitive data. RBAC is a security model that assigns permissions and privileges to roles, rather than to individual users. Roles are defined based on the functions or responsibilities of the users in an organization. Users are assigned to roles based on their qualifications or credentials. RBAC enforces the principle of least privilege, which means that users only have the minimum permissions and privileges necessary to perform their tasks. RBAC also simplifies the administration and management of access control, as it reduces the complexity and redundancy of assigning permissions and privileges to individual users. RBAC is not the same as biometric access control, federated identity management, or application hardening. Biometric access control is a security mechanism that uses physical or behavioral characteristics of the users, such as fingerprints, iris patterns, or voice recognition, to authenticate and authorize them. Federated identity management is a security mechanism that enables the sharing and recognition of identity information across different organizations or domains, using standards and protocols such as SAML, OAuth, or OpenID. Application hardening is a security mechanism that involves the modification or improvement of an application’s code, design, or configuration, to make it more resistant to attacks or vulnerabilities.
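The role-to-permission indirection described above can be sketched in a few lines: privileged actions attach to roles, users receive roles, and a privileged procedure runs only if one of the user's roles grants it. The role, user, and permission names are illustrative.

```python
# Minimal RBAC sketch: permissions attach to roles, users get roles.
# All role, user, and permission names are illustrative.

ROLE_PERMISSIONS = {
    "operator": {"view_logs"},
    "admin":    {"view_logs", "change_settings", "install_software"},
}

USER_ROLES = {"alice": {"admin"}, "bob": {"operator"}}

def is_authorized(user: str, action: str) -> bool:
    """A privileged procedure runs only if one of the user's roles grants it."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("alice", "install_software"))  # True: admin role grants it
print(is_authorized("bob", "install_software"))    # False: operator role does not
```

Note that adding or removing a user's privileges means changing role membership, not editing per-user permission lists, which is the administrative simplification the text describes.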
Which of the following is an authentication protocol in which a new random number is generated uniquely for each login session?
Challenge Handshake Authentication Protocol (CHAP)
Point-to-Point Protocol (PPP)
Extensible Authentication Protocol (EAP)
Password Authentication Protocol (PAP)
Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol in which a new random number is generated uniquely for each login session. CHAP is used to authenticate a user or a system over a Point-to-Point Protocol (PPP) connection, such as a dial-up or a VPN connection. CHAP works as follows: The server sends a challenge message to the client, which contains a random number. The client calculates a response by applying a one-way hash function to the random number and its own secret key, and sends the response back to the server. The server performs the same calculation using the same random number and the secret key stored in its database, and compares the results. If they match, the authentication is successful. CHAP provides more security than Password Authentication Protocol (PAP), which sends the username and password in clear text over the network. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 516; CISSP For Dummies, 7th Edition, Chapter 5, page 151.
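The exchange described above can be sketched directly. In RFC 1994 the response is MD5 over the identifier, the shared secret, and the challenge; the secret and identifier values below are illustrative.

```python
import hashlib
import os

# Sketch of a CHAP-style exchange. Per RFC 1994 the response is
# MD5(identifier || shared secret || challenge). Values are illustrative.

secret = b"shared-secret"    # known to both client and server, never sent
challenge = os.urandom(16)   # fresh random number generated per session
identifier = b"\x01"         # session/message identifier

def chap_response(ident: bytes, sec: bytes, chal: bytes) -> bytes:
    return hashlib.md5(ident + sec + chal).digest()

# Client computes the response; server recomputes it and compares.
client_resp = chap_response(identifier, secret, challenge)
server_expected = chap_response(identifier, secret, challenge)
print(client_resp == server_expected)  # True: authentication succeeds
```

Because a fresh challenge is issued per session, a captured response cannot be replayed later, which is the property that distinguishes CHAP from PAP.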
The birthday attack is MOST effective against which one of the following cipher technologies?
Chaining block encryption
Asymmetric cryptography
Cryptographic hash
Streaming cryptography
The birthday attack is most effective against cryptographic hash, which is one of the cipher technologies. A cryptographic hash is a function that takes an input of any size and produces an output of a fixed size, called a hash or a digest, that represents the input. A cryptographic hash has several properties, such as being one-way, collision-resistant, and deterministic. A birthday attack is a type of brute-force attack that exploits the mathematical phenomenon known as the birthday paradox, which states that in a set of randomly chosen elements, there is a high probability that some pair of elements will have the same value. A birthday attack can be used to find collisions in a cryptographic hash, which means finding two different inputs that produce the same hash. Finding collisions can compromise the integrity or the security of the hash, as it can allow an attacker to forge or modify the input without changing the hash. Chaining block encryption, asymmetric cryptography, and streaming cryptography are not as vulnerable to the birthday attack, as they are different types of encryption algorithms that use keys and ciphers to transform the input into an output. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 3, page 133; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, page 143.
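The birthday effect can be demonstrated against a deliberately tiny hash. Truncating SHA-256 to 16 bits gives a 65,536-value output space, so a collision is expected after only about 2^8 = 256 inputs, far fewer than the 65,536 a naive preimage search would suggest.

```python
import hashlib
from itertools import count

# Birthday-attack sketch against a toy 16-bit "hash" (the first two
# bytes of SHA-256). Roughly sqrt(2**16) = 256 inputs are expected
# before two of them collide.

def tiny_hash(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()[:2]  # 16-bit truncation

seen = {}  # maps tiny digest -> first message that produced it
for i in count():
    msg = str(i).encode()
    h = tiny_hash(msg)
    if h in seen:
        print(f"collision: {seen[h]!r} and {msg!r} -> {h.hex()}")
        break
    seen[h] = msg
```

A full-length hash such as SHA-256 keeps the same birthday bound, but at 2^128 work it is computationally out of reach; the attack only becomes practical when the digest is short or weakened.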
What would be the PRIMARY concern when designing and coordinating a security assessment for an Automatic Teller Machine (ATM) system?
Physical access to the electronic hardware
Regularly scheduled maintenance process
Availability of the network connection
Processing delays
The primary concern when designing and coordinating a security assessment for an Automatic Teller Machine (ATM) system is the availability of the network connection. An ATM system relies on a network connection to communicate with the bank’s servers and process the transactions of the customers. If the network connection is disrupted, degraded, or compromised, the ATM system may not be able to function properly, or may expose the customers’ data or money to unauthorized access or theft. Therefore, a security assessment for an ATM system should focus on ensuring that the network connection is reliable, resilient, and secure, and that there are backup or alternative solutions in case of network failure. References: ATM Security: Best Practices for Automated Teller Machines; ATM Security: A Comprehensive Guide
When constructing an Information Protection Policy (IPP), it is important that the stated rules are necessary, adequate, and
flexible.
confidential.
focused.
achievable.
An Information Protection Policy (IPP) is a document that defines the objectives, scope, roles, responsibilities, and rules for protecting the information assets of an organization. An IPP should be aligned with the business goals and legal requirements, and should be communicated and enforced throughout the organization. When constructing an IPP, it is important that the stated rules are necessary, adequate, and achievable, meaning that they are relevant, sufficient, and realistic for the organization’s context and capabilities. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 23; CISSP For Dummies, 7th Edition, Chapter 1, page 15.
Including a Trusted Platform Module (TPM) in the design of a computer system is an example of a technique to what?
Interface with the Public Key Infrastructure (PKI)
Improve the quality of security software
Prevent Denial of Service (DoS) attacks
Establish a secure initial state
Including a Trusted Platform Module (TPM) in the design of a computer system is an example of a technique to establish a secure initial state. A TPM is a hardware device that provides cryptographic functions and secure storage for keys, certificates, passwords, and other sensitive data. A TPM can also measure and verify the integrity of the system components, such as the BIOS, boot loader, operating system, and applications, before they are executed. This process is known as trusted boot or measured boot, and it ensures that the system is in a known and trusted state before allowing access to the user or network. A TPM can also enable features such as disk encryption, remote attestation, and platform authentication. References: What is a Trusted Platform Module (TPM)?; Trusted Platform Module (TPM) Fundamentals
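The measurement chain can be sketched with the TPM's extend operation, where each stage's hash is folded into a Platform Configuration Register (PCR) so the final value depends on every component measured. The component names below are illustrative.

```python
import hashlib

# Sketch of TPM-style PCR extension during measured boot:
# PCR_new = SHA-256(PCR_old || SHA-256(component)).
# Component names are illustrative.

def extend(pcr: bytes, component: bytes) -> bytes:
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measure_boot(components):
    pcr = b"\x00" * 32  # known initial register state
    for c in components:
        pcr = extend(pcr, c)
    return pcr

good = measure_boot([b"BIOS", b"bootloader", b"kernel"])
evil = measure_boot([b"BIOS", b"tampered bootloader", b"kernel"])
print(good != evil)  # True: any tampering changes the final measurement
```

Because the chain is order-dependent and one-way, an attacker cannot substitute a component and still reproduce the expected PCR value, which is what lets the system attest to a secure initial state.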
During an audit of system management, auditors find that the system administrator has not been trained. What actions need to be taken at once to ensure the integrity of systems?
A review of hiring policies and methods of verification of new employees
A review of all departmental procedures
A review of all training procedures to be undertaken
A review of all systems by an experienced administrator
During an audit of system management, if auditors find that the system administrator has not been trained, the immediate action that needs to be taken to ensure the integrity of systems is a review of all systems by an experienced administrator. This verifies that the systems are configured, maintained, and secured properly, and that there are no errors, vulnerabilities, or breaches that could compromise the systems’ availability, confidentiality, or integrity. Reviews of hiring policies, departmental procedures, or training procedures are not urgent actions, as they relate to the long-term improvement of the system management process rather than the current state of the systems. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 829; CISSP For Dummies, 7th Edition, Chapter 8, page 267.
Why MUST a Kerberos server be well protected from unauthorized access?
It contains the keys of all clients.
It always operates at root privilege.
It contains all the tickets for services.
It contains the Internet Protocol (IP) address of all network entities.
A Kerberos server must be well protected from unauthorized access because it contains the keys of all clients. Kerberos is a network authentication protocol that uses symmetric cryptography and a trusted third party, called the Key Distribution Center (KDC), to provide secure and mutual authentication between clients and servers. The KDC consists of two components: the Authentication Server (AS) and the Ticket Granting Server (TGS). The AS issues a Ticket Granting Ticket (TGT) to the client after verifying its identity and password. The TGS issues a service ticket to the client after validating its TGT and the requested service. The client then uses the service ticket to access the service. The KDC stores the keys of all clients and services in its database, and uses them to encrypt and decrypt the tickets. If an attacker gains access to the KDC, they can compromise the keys and the tickets, and impersonate any client or service on the network. References: CISSP For Dummies, 7th Edition, Chapter 4, page 91.
Which of the following assessment metrics is BEST used to understand a system's vulnerability to potential exploits?
Determining the probability that the system functions safely during any time period
Quantifying the system's available services
Identifying the number of security flaws within the system
Measuring the system's integrity in the presence of failure
Identifying the number of security flaws within the system is the best assessment metric to understand a system’s vulnerability to potential exploits. A security flaw is a weakness or a defect in the system’s design, implementation, or operation that could be exploited by an attacker to compromise the system’s confidentiality, integrity, or availability. By identifying the number of security flaws within the system, the assessor can measure the system’s vulnerability, which is the degree to which the system is susceptible or exposed to attacks. Determining the probability that the system functions safely during any time period, quantifying the system’s available services, and measuring the system’s integrity in the presence of failure are not assessment metrics that directly relate to the system’s vulnerability to potential exploits, as they are more concerned with the system’s reliability, availability, and resilience. References: CISSP For Dummies, 7th Edition, Chapter 8, page 217; Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 8, page 461.
A system has been scanned for vulnerabilities and has been found to contain a number of communication ports that have been opened without authority. To which of the following might this system have been subjected?
Trojan horse
Denial of Service (DoS)
Spoofing
Man-in-the-Middle (MITM)
A trojan horse is a type of malware that masquerades as a legitimate program or file, but performs malicious actions in the background. A trojan horse may open unauthorized ports on the infected system, allowing remote access or communication by the attacker or other malware. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 643; CISSP For Dummies, 7th Edition, Chapter 6, page 189.
A security consultant has been asked to research an organization's legal obligations to protect privacy-related information. What kind of reading material is MOST relevant to this project?
The organization's current security policies concerning privacy issues
Privacy-related regulations enforced by governing bodies applicable to the organization
Privacy best practices published by recognized security standards organizations
Organizational procedures designed to protect privacy information
The most relevant reading material for researching an organization’s legal obligations to protect privacy-related information is the privacy-related regulations enforced by governing bodies applicable to the organization. These regulations define the legal requirements, standards, and penalties for collecting, processing, storing, and disclosing personal or sensitive information of individuals or entities. The organization must comply with these regulations to avoid legal liabilities, fines, or sanctions. The other options are not as relevant as privacy-related regulations, as they either do not reflect the legal obligations of the organization (A and C), or do not apply to all types of privacy-related information (D). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 22; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 31.
An organization is designing a large enterprise-wide document repository system. They plan to have several different classification level areas with increasing levels of controls. The BEST way to ensure document confidentiality in the repository is to
encrypt the contents of the repository and document any exceptions to that requirement.
utilize Intrusion Detection System (IDS) set drop connections if too many requests for documents are detected.
keep individuals with access to high security areas from saving those documents into lower security areas.
require individuals with access to the system to sign Non-Disclosure Agreements (NDA).
The best way to ensure document confidentiality in the repository is to encrypt the contents of the repository and document any exceptions to that requirement. Encryption is the process of transforming the information into an unreadable form using a secret key or algorithm. Encryption protects the confidentiality of the information by preventing unauthorized access or disclosure, even if the repository is compromised or breached. Encryption also provides integrity and authenticity of the information by ensuring that it has not been modified or tampered with. Documenting any exceptions to the encryption requirement is also important to justify the reasons and risks for not encrypting certain information, and to apply alternative controls if needed. References: What Is a Document Repository and What Are the Benefits of Using One; What is a document repository and why you should have one
Which of the following is the best practice for testing a Business Continuity Plan (BCP)?
Test before the IT Audit
Test when environment changes
Test after installation of security patches
Test after implementation of system patches
The best practice for testing a Business Continuity Plan (BCP) is to test it when the environment changes, such as when there are new business processes, technologies, threats, or regulations. This ensures that the BCP is updated, relevant, and effective for the current situation. Testing the BCP before the IT audit, after installation of security patches, or after implementation of system patches are not the best practices, as they may not reflect the actual changes in the business environment or the potential disruptions that may occur. References: Comprehensive Guide to Business Continuity Testing; Maximizing Your BCP Testing Efforts: Best Practices
Which one of the following is the MOST important in designing a biometric access system if it is essential that no one other than authorized individuals are admitted?
False Acceptance Rate (FAR)
False Rejection Rate (FRR)
Crossover Error Rate (CER)
Rejection Error Rate
The most important factor in designing a biometric access system, if it is essential that no one other than authorized individuals is admitted, is the False Acceptance Rate (FAR). FAR is the probability that a biometric system will incorrectly accept an unauthorized user. FAR is a measure of the security or accuracy of the biometric system, and it should be as low as possible to prevent unauthorized access. False Rejection Rate (FRR), Crossover Error Rate (CER), and Rejection Error Rate are not as important as FAR in this scenario, as they relate to the usability or convenience of the biometric system rather than its security. FRR is the probability that a biometric system will incorrectly reject an authorized user. CER is the point where FAR and FRR are equal, and it is used to compare the performance of different biometric systems. Rejection Error Rate is the probability that a biometric system will fail to capture or process a biometric sample. References: CISSP For Dummies, 7th Edition, Chapter 4, page 95.
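The two rates can be computed from trial counts, as sketched below with made-up numbers. For a high-security door, the system is tuned toward a low FAR even at the cost of a higher FRR (more legitimate users inconvenienced).

```python
# Sketch of computing FAR and FRR from trial counts.
# All numbers are made up for illustration.

impostor_attempts, false_accepts = 10_000, 3    # unauthorized users admitted
genuine_attempts, false_rejects = 10_000, 250   # authorized users turned away

far = false_accepts / impostor_attempts  # False Acceptance Rate
frr = false_rejects / genuine_attempts   # False Rejection Rate

print(f"FAR={far:.4%}  FRR={frr:.2%}")
```

In this hypothetical tuning, only 0.03% of impostor attempts succeed while 2.5% of genuine users are rejected, which is the trade-off the question's scenario demands.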
The stringency of an Information Technology (IT) security assessment will be determined by the
system's past security record.
size of the system's database.
sensitivity of the system's data.
age of the system.
The stringency of an Information Technology (IT) security assessment will be determined by the sensitivity of the system’s data, as this reflects the level of risk and impact that a security breach could have on the organization and its stakeholders. The more sensitive the data, the more stringent the security assessment should be, as it should cover more aspects of the system, use more rigorous methods and tools, and provide more detailed and accurate results and recommendations. The system’s past security record, size of the system’s database, and age of the system are not the main factors that determine the stringency of the security assessment, as they do not directly relate to the value and importance of the data that the system processes, stores, or transmits. References: Common Criteria for Information Technology Security Evaluation; Information technology security assessment - Wikipedia
Which of the following statements is TRUE for point-to-point microwave transmissions?
They are not subject to interception due to encryption.
Interception only depends on signal strength.
They are too highly multiplexed for meaningful interception.
They are subject to interception by an antenna within proximity.
They are subject to interception by an antenna within proximity. Point-to-point microwave transmissions are line-of-sight media, which means that they can be intercepted by any antenna that is in the direct path of the signal. The interception does not depend on encryption, multiplexing, or signal strength, as long as the antenna is close enough to receive the signal.
What principle requires that changes to the plaintext affect many parts of the ciphertext?
Diffusion
Encapsulation
Obfuscation
Permutation
Diffusion is the principle that requires that changes to the plaintext affect many parts of the ciphertext. Diffusion is a property of a good encryption algorithm that aims to spread the influence of each plaintext bit over many ciphertext bits, so that a small change in the plaintext results in a large change in the ciphertext. Diffusion can increase the security of the encryption by making it harder for an attacker to analyze the statistical patterns or correlations between the plaintext and the ciphertext. Encapsulation, obfuscation, and permutation are not principles that require that changes to the plaintext affect many parts of the ciphertext, as they are related to different aspects of encryption or security. References: CISSP For Dummies, 7th Edition, Chapter 3, page 65.
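The avalanche behavior that diffusion produces can be observed directly: flipping a single input bit of SHA-256 changes roughly half of its 256 output bits. The message text below is illustrative.

```python
import hashlib

# Avalanche/diffusion sketch: flipping one plaintext bit changes
# roughly half of the 256 output bits of SHA-256.

def bit_diff(a: bytes, b: bytes) -> int:
    """Count the number of bit positions where a and b differ."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

m1 = b"transfer $100 to account 42"
m2 = bytearray(m1)
m2[0] ^= 0x01  # flip a single bit of the input

d = bit_diff(hashlib.sha256(m1).digest(),
             hashlib.sha256(bytes(m2)).digest())
print(f"{d} of 256 output bits changed")
```

An output this scrambled from a one-bit input change is exactly what denies the attacker any usable correlation between plaintext and ciphertext.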
Which of the following is a method used to prevent Structured Query Language (SQL) injection attacks?
Data compression
Data classification
Data warehousing
Data validation
Data validation is a method used to prevent Structured Query Language (SQL) injection attacks, which are a type of web application attack that exploit the input fields of a web form to inject malicious SQL commands into the underlying database. Data validation involves checking the input data for any illegal or unexpected characters, such as quotes, semicolons, or keywords, and rejecting or sanitizing them before passing them to the database. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 660; CISSP For Dummies, 7th Edition, Chapter 6, page 199.
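A minimal sketch of the technique, paired with parameterized queries (the standard companion control), is shown below using an in-memory SQLite database; the schema and data are illustrative.

```python
import sqlite3

# Sketch: input validation plus a parameterized query against SQL
# injection. The schema and data are illustrative.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup(name: str):
    if not name.isalnum():  # validation: reject SQL metacharacters outright
        raise ValueError("invalid input")
    # parameterization: the driver treats name as data, never as SQL
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

print(lookup("alice"))
try:
    lookup("alice' OR '1'='1")  # classic injection string
except ValueError as exc:
    print("rejected:", exc)
```

Even if the validation step were missing, the `?` placeholder would keep the injection string from being interpreted as SQL; using both gives defense in depth at the input boundary.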
Which of the following is a security feature of Global Systems for Mobile Communications (GSM)?
It uses a Subscriber Identity Module (SIM) for authentication.
It uses encrypting techniques for all communications.
The radio spectrum is divided with multiple frequency carriers.
The signal is difficult to read as it provides end-to-end encryption.
A security feature of Global Systems for Mobile Communications (GSM) is that it uses a Subscriber Identity Module (SIM) for authentication. A SIM is a smart card that contains the subscriber’s identity, phone number, network information, and encryption keys. The SIM is inserted into the mobile device and communicates with the network to authenticate the subscriber and establish a secure connection. The SIM also stores the subscriber’s contacts, messages, and preferences. The SIM provides security by preventing unauthorized access to the subscriber’s account and data, and by allowing the subscriber to easily switch devices without losing their information. References: GSM - Security and Encryption; Introduction to GSM security
Which of the following is considered best practice for preventing e-mail spoofing?
Spam filtering
Cryptographic signature
Uniform Resource Locator (URL) filtering
Reverse Domain Name Service (DNS) lookup
The best practice for preventing e-mail spoofing is to use cryptographic signatures. E-mail spoofing is a technique that involves forging the sender’s address or identity in an e-mail message, usually to trick the recipient into opening a malicious attachment, clicking on a phishing link, or disclosing sensitive information. Cryptographic signatures are digital signatures that are created by encrypting the e-mail message or a part of it with the sender’s private key, and attaching it to the e-mail message. Cryptographic signatures can be used to verify the authenticity and integrity of the sender and the message, and to prevent e-mail spoofing. References: What is Email Spoofing?; How to Prevent Email Spoofing
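The verification step can be sketched as follows. Real e-mail signing (S/MIME, DKIM) uses asymmetric keys; an HMAC stands in here so the example stays self-contained, and the key and message are illustrative.

```python
import hashlib
import hmac

# Simplified sketch of signature-based sender verification. Real e-mail
# signing (S/MIME, DKIM) uses asymmetric keys; an HMAC over the message
# stands in for the signature here. Key and message are illustrative.

key = b"sender-signing-key"
message = b"From: alice@example.com\r\n\r\nWire the funds today."

signature = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(msg: bytes, sig: str) -> bool:
    """Recipient recomputes the signature; spoofed or altered mail fails."""
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

print(verify(message, signature))                           # True: genuine
print(verify(b"From: mallory@example.com ...", signature))  # False: spoofed
```

The point the example makes is that a forged sender address cannot carry a valid signature, because the forger lacks the signing key, which is exactly what spam filtering and reverse DNS cannot guarantee.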
Which security action should be taken FIRST when computer personnel are terminated from their jobs?
Remove their computer access
Require them to turn in their badge
Conduct an exit interview
Reduce their physical access level to the facility
The first security action that should be taken when computer personnel are terminated from their jobs is to remove their computer access. Computer access is the ability to log in, use, or modify the computer systems, networks, or data of the organization. Removing computer access can prevent the terminated personnel from accessing or harming the organization’s information assets, or from stealing or leaking sensitive or confidential data. Removing computer access can also reduce the risk of insider threats, such as sabotage, fraud, or espionage. Requiring them to turn in their badge, conducting an exit interview, and reducing their physical access level to the facility are also important security actions that should be taken when computer personnel are terminated from their jobs, but they are not as urgent or critical as removing their computer access. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 5, page 249.
The type of authorized interactions a subject can have with an object is
control.
permission.
procedure.
protocol.
Permission is the type of authorized interactions a subject can have with an object. Permission is a rule or a setting that defines the specific actions or operations that a subject can perform on an object, such as read, write, execute, or delete. Permission is usually granted by the owner or the administrator of the object, and can be based on the identity, role, or group membership of the subject. Control, procedure, and protocol are not types of authorized interactions a subject can have with an object, as they are related to different aspects of access control or security. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 355.
The three PRIMARY requirements for a penetration test are
A defined goal, limited time period, and approval of management
A general objective, unlimited time, and approval of the network administrator
An objective statement, disclosed methodology, and fixed cost
A stated objective, liability waiver, and disclosed methodology
The three primary requirements for a penetration test are a defined goal, a limited time period, and the approval of management. A penetration test is a security assessment that simulates a malicious attack on an information system or network, with the owner’s permission, to identify and exploit vulnerabilities and evaluate the security posture of the system or network. A defined goal is the specific objective or scope of the test, such as a particular system, network, application, or function. A limited time period is the duration or deadline of the test, such as a few hours, days, or weeks. Approval of management is the formal authorization and consent from the senior management of the organization that owns the system or network to be tested, as well as the management of the organization that conducts the test. A general objective, unlimited time, and approval of the network administrator are not the primary requirements for a penetration test, as they do not provide a clear and realistic direction, scope, and authorization for the test.
The use of strong authentication, the encryption of Personally Identifiable Information (PII) on database servers, application security reviews, and the encryption of data transmitted across networks provide
data integrity.
defense in depth.
data availability.
non-repudiation.
 Defense in depth is a security strategy that involves applying multiple layers of protection to a system or network to prevent or mitigate attacks. The use of strong authentication, the encryption of Personally Identifiable Information (PII) on database servers, application security reviews, and the encryption of data transmitted across networks are examples of defense in depth measures that can enhance the security of the system or network.
A, C, and D are incorrect because they are not the best terms to describe the security strategy. Data integrity is a property of data that ensures its accuracy, consistency, and validity. Data availability is a property of data that ensures its accessibility and usability. Non-repudiation is a property of data that ensures its authenticity and accountability. While these properties are important for security, they are not the same as defense in depth.
As one component of a physical security system, an Electronic Access Control (EAC) token is BEST known for its ability to
overcome the problems of key assignments.
monitor the opening of windows and doors.
trigger alarms when intruders are detected.
lock down a facility during an emergency.
An Electronic Access Control (EAC) token is best known for its ability to overcome the problems of key assignments in a physical security system. An EAC token is a device that can be used to authenticate a user or grant access to a physical area or resource, such as a door, a gate, or a locker2. An EAC token can be a smart card, a magnetic stripe card, a proximity card, a key fob, or a biometric device. An EAC token can overcome the problems of key assignments, which are the issues or challenges of managing and distributing physical keys to authorized users, such as lost, stolen, duplicated, or unreturned keys. An EAC token can provide more security, convenience, and flexibility than a physical key, as it can be easily activated, deactivated, or replaced, and it can also store additional information or perform other functions. Monitoring the opening of windows and doors, triggering alarms when intruders are detected, and locking down a facility during an emergency are not the abilities that an EAC token is best known for, as they are more related to the functions of other components of a physical security system, such as sensors, alarms, or locks. References: 2: CISSP For Dummies, 7th Edition, Chapter 9, page 253.
Which one of these risk factors would be the LEAST important consideration in choosing a building site for a new computer facility?
Vulnerability to crime
Adjacent buildings and businesses
Proximity to an airline flight path
Vulnerability to natural disasters
Proximity to an airline flight path is the least important consideration in choosing a building site for a new computer facility, as it poses the lowest risk factor compared to the other options. Proximity to an airline flight path may cause some noise or interference issues, but it is unlikely to result in a major disaster or damage to the computer facility, unless there is a rare case of a plane crash or a terrorist attack3. Vulnerability to crime, adjacent buildings and businesses, and vulnerability to natural disasters are more important considerations in choosing a building site for a new computer facility, as they can pose significant threats to the physical security, availability, and integrity of the facility and its assets. Vulnerability to crime can expose the facility to theft, vandalism, or sabotage. Adjacent buildings and businesses can affect the fire safety, power supply, or environmental conditions of the facility. Vulnerability to natural disasters can cause the facility to suffer from floods, earthquakes, storms, or fires. References: 3: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 10, page 543.
By allowing storage communications to run on top of Transmission Control Protocol/Internet Protocol (TCP/IP) with a Storage Area Network (SAN), the
confidentiality of the traffic is protected.
opportunity to sniff network traffic exists.
opportunity for device identity spoofing is eliminated.
storage devices are protected against availability attacks.
 By allowing storage communications to run on top of Transmission Control Protocol/Internet Protocol (TCP/IP) with a Storage Area Network (SAN), the opportunity to sniff network traffic exists. A SAN is a dedicated network that connects storage devices, such as disk arrays, tape libraries, or servers, to provide high-speed data access and transfer. A SAN may use different protocols or technologies to communicate with storage devices, such as Fibre Channel, iSCSI, or NFS. By allowing storage communications to run on top of TCP/IP, a common network protocol that supports internet and intranet communications, a SAN may leverage the existing network infrastructure and reduce costs and complexity. However, this also exposes the storage communications to the same risks and threats that affect the network communications, such as sniffing, spoofing, or denial-of-service attacks. Sniffing is the act of capturing or monitoring network traffic, which may reveal sensitive or confidential information, such as passwords, encryption keys, or data. By allowing storage communications to run on top of TCP/IP with a SAN, the confidentiality of the traffic is not protected, unless encryption or other security measures are applied. The opportunity for device identity spoofing is not eliminated, as an attacker may still impersonate a legitimate storage device or server by using a forged or stolen IP address or MAC address. The storage devices are not protected against availability attacks, as an attacker may still disrupt or overload the network or the storage devices by sending malicious or excessive packets or requests.
The PRIMARY purpose of a security awareness program is to
ensure that everyone understands the organization's policies and procedures.
communicate that access to information will be granted on a need-to-know basis.
warn all users that access to all systems will be monitored on a daily basis.
comply with regulations related to data and information protection.
The primary purpose of a security awareness program is to ensure that everyone understands the organization’s policies and procedures related to information security. A security awareness program is a set of activities, materials, or events that aim to educate and inform the employees, contractors, partners, and customers of the organization about the security goals, principles, and practices of the organization1. A security awareness program can help to create a security culture, improve the security behavior, and reduce the human errors or risks. Communicating that access to information will be granted on a need-to-know basis, warning all users that access to all systems will be monitored on a daily basis, and complying with regulations related to data and information protection are not the primary purposes of a security awareness program, as they are more specific or secondary objectives that may be part of the program, but not the main goal. References: 1: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 28.
Which of the following statements is TRUE of black box testing?
Only the functional specifications are known to the test planner.
Only the source code and the design documents are known to the test planner.
Only the source code and functional specifications are known to the test planner.
Only the design documents and the functional specifications are known to the test planner.
Black box testing is a method of software testing that does not require any knowledge of the internal structure or code of the software1. The test planner only knows the functional specifications, which describe what the software is supposed to do, and tests the software based on the expected inputs and outputs. Black box testing is useful for finding errors in the functionality, usability, or performance of the software, but it cannot detect errors in the code or design. White box testing, on the other hand, requires the test planner to have access to the source code and the design documents, and tests the software based on the internal logic and structure2. References: 1: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21, page 1313. 2: CISSP For Dummies, 7th Edition, Chapter 8, page 215.
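As a toy illustration (the `bulk_discount` function and its pricing rules are invented for this sketch), a black-box test exercises only the functional specification — documented inputs and expected outputs, especially boundary values — without ever inspecting the implementation:

```python
# Spec (all a black-box tester knows): bulk_discount(qty, unit_price_cents)
# returns the total price in cents, with 10% off for 100+ units and
# 20% off for 500+ units. The body below is hidden from the tester.

def bulk_discount(qty, unit_price_cents):
    total = qty * unit_price_cents
    if qty >= 500:
        return total * 80 // 100
    if qty >= 100:
        return total * 90 // 100
    return total

# Black-box test cases are derived from the spec alone, probing the
# boundary values around the documented thresholds (99/100, 499/500).
assert bulk_discount(99, 100) == 9900    # just below first threshold
assert bulk_discount(100, 100) == 9000   # 10% discount applies
assert bulk_discount(499, 100) == 44910  # still 10%
assert bulk_discount(500, 100) == 40000  # 20% discount applies
```

Note that these tests could pass even if the hidden implementation contained dead code or design flaws, which is exactly the limitation the explanation above describes.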
Which of the following types of technologies would be the MOST cost-effective method to provide a reactive control for protecting personnel in public areas?
Install mantraps at the building entrances
Enclose the personnel entry area with polycarbonate plastic
Supply a duress alarm for personnel exposed to the public
Hire a guard to protect the public area
 Supplying a duress alarm for personnel exposed to the public is the most cost-effective method to provide a reactive control for protecting personnel in public areas. A duress alarm is a device that allows a person to signal for help in case of an emergency, such as an attack, a robbery, or a medical condition. A duress alarm can be activated by pressing a button, pulling a cord, or speaking a code word. A duress alarm can alert security personnel, law enforcement, or other responders to the location and nature of the emergency, and initiate appropriate actions. A duress alarm is a reactive control because it responds to an incident after it has occurred, rather than preventing it from happening.
The other options are not as cost-effective as supplying a duress alarm, as they involve more expensive or complex technologies or resources. Installing mantraps at the building entrances is a preventive control that restricts the access of unauthorized persons to the facility, but it also requires more space, maintenance, and supervision. Enclosing the personnel entry area with polycarbonate plastic is a preventive control that protects the personnel from physical attacks, but it also reduces the visibility and ventilation of the area. Hiring a guard to protect the public area is a deterrent control that discourages potential attackers, but it also involves paying wages, benefits, and training costs.
Which of the following actions will reduce risk to a laptop before traveling to a high risk area?
Examine the device for physical tampering
Implement more stringent baseline configurations
Purge or re-image the hard disk drive
Change access codes
Purging or re-imaging the hard disk drive of a laptop before traveling to a high risk area will reduce the risk of data compromise or theft in case the laptop is lost, stolen, or seized by unauthorized parties. Purging or re-imaging the hard disk drive removes all user data and applications from the laptop; re-imaging then restores a clean baseline containing only the operating system and essential software. This minimizes the exposure of sensitive or confidential information that could be accessed by malicious actors. Purging should be done using secure methods that prevent data recovery, such as overwriting, degaussing, or physical destruction.
The other options will not reduce the risk to the laptop as effectively as purging or re-imaging the hard disk drive. Examining the device for physical tampering will only detect if the laptop has been compromised after the fact, but will not prevent it from happening. Implementing more stringent baseline configurations will improve the security settings and policies of the laptop, but will not protect the data if the laptop is bypassed or breached. Changing access codes will make it harder for unauthorized users to log in to the laptop, but will not prevent them from accessing the data if they use other methods, such as booting from a removable media or removing the hard disk drive.
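As a minimal sketch of the overwriting method mentioned above — file-level only, and assuming a conventional magnetic disk, since wear-leveling on SSDs and copy-on-write filesystems can leave old blocks recoverable — a file can be scrubbed with random bytes before deletion:

```python
import os
import tempfile

def overwrite_and_delete(path, passes=1):
    """Overwrite a file's contents with random bytes, then delete it.

    Illustrative only: file-level overwriting is NOT a reliable purge on
    SSDs or journaling filesystems; whole-disk purging uses dedicated
    tools, degaussing, or cryptographic erase.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to the device
    os.remove(path)

# Example: scrub a sensitive temporary file before travel.
fd, name = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"customer list: ...")
overwrite_and_delete(name)
assert not os.path.exists(name)
```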
What is the MOST important consideration from a data security perspective when an organization plans to relocate?
Ensure the fire prevention and detection systems are sufficient to protect personnel
Review the architectural plans to determine how many emergency exits are present
Conduct a gap analysis of the new facilities against existing security requirements
Revise the Disaster Recovery and Business Continuity (DR/BC) plan
 When an organization plans to relocate, the most important consideration from a data security perspective is to conduct a gap analysis of the new facilities against the existing security requirements. A gap analysis is a process that identifies and evaluates the differences between the current state and the desired state of a system or a process. In this case, the gap analysis would compare the security controls and measures implemented in the old and new locations, and identify any gaps or weaknesses that need to be addressed. The gap analysis would also help to determine the costs and resources needed to implement the necessary security improvements in the new facilities.
The other options are not as important as conducting a gap analysis, as they do not directly address the data security risks associated with relocation. Ensuring the fire prevention and detection systems are sufficient to protect personnel is a safety issue, not a data security issue. Reviewing the architectural plans to determine how many emergency exits are present is also a safety issue, not a data security issue. Revising the Disaster Recovery and Business Continuity (DR/BC) plan is good practice, but it is a reactive measure rather than a preventive one. A DR/BC plan is a document that outlines how an organization will recover from a disaster and resume its normal operations, and it should be updated regularly, not only when relocating.
All of the following items should be included in a Business Impact Analysis (BIA) questionnaire EXCEPT questions that
determine the risk of a business interruption occurring
determine the technological dependence of the business processes
identify the operational impacts of a business interruption
identify the financial impacts of a business interruption
A Business Impact Analysis (BIA) is a process that identifies and evaluates the potential effects of natural and man-made disasters on business operations. The BIA questionnaire is a tool that collects information from business process owners and stakeholders about the criticality, dependencies, recovery objectives, and resources of their processes. The BIA questionnaire should include questions that:
The BIA questionnaire should not include questions that determine the risk of a business interruption occurring, as this is part of the risk assessment process, which is a separate activity from the BIA. The risk assessment process identifies and analyzes the threats and vulnerabilities that could cause a business interruption, and estimates the likelihood and impact of such events. The risk assessment process also evaluates the existing controls and mitigation strategies, and recommends additional measures to reduce the risk to an acceptable level.
A company whose Information Technology (IT) services are being delivered from a Tier 4 data center, is preparing a companywide Business Continuity Plan (BCP). Which of the following failures should the IT manager be concerned with?
Application
Storage
Power
Network
A company whose IT services are being delivered from a Tier 4 data center should be most concerned with application failures when preparing a companywide BCP. A BCP is a document that describes how an organization will continue its critical business functions in the event of a disruption or disaster. A BCP should include a risk assessment, a business impact analysis, a recovery strategy, and a testing and maintenance plan.
A Tier 4 data center is the highest level of data center classification, according to the Uptime Institute. A Tier 4 data center has the highest level of availability, reliability, and fault tolerance, as it has multiple and independent paths for power and cooling, and redundant and backup components for all systems. A Tier 4 data center has an uptime rating of 99.995%, which means it can only experience 0.4 hours of downtime per year. Therefore, the likelihood of a power, storage, or network failure in a Tier 4 data center is very low, and the impact of such a failure would be minimal, as the data center can quickly switch to alternative sources or routes.
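The downtime figure follows directly from the availability rating, as a quick calculation shows:

```python
# Annual downtime implied by a Tier 4 availability rating of 99.995%.
hours_per_year = 24 * 365                     # 8760 hours
availability = 0.99995
downtime_hours = hours_per_year * (1 - availability)
print(round(downtime_hours, 2))               # ~0.44 hours, i.e. about 26 minutes
```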
However, a Tier 4 data center cannot prevent or mitigate application failures, which are caused by software bugs, configuration errors, or malicious attacks. Application failures can affect the functionality, performance, or security of the IT services, and cause data loss, corruption, or breach. Therefore, the IT manager should be most concerned with application failures when preparing a BCP, and ensure that the applications are properly designed, tested, updated, and monitored.
Intellectual property rights are PRIMARILY concerned with which of the following?
Owner’s ability to realize financial gain
Owner’s ability to maintain copyright
Right of the owner to enjoy their creation
Right of the owner to control delivery method
Intellectual property rights are primarily concerned with the owner’s ability to realize financial gain from their creation. Intellectual property is a category of intangible assets that are the result of human creativity and innovation, such as inventions, designs, artworks, literature, music, software, etc. Intellectual property rights are the legal rights that grant the owner the exclusive control over the use, reproduction, distribution, and modification of their intellectual property. Intellectual property rights aim to protect the owner’s interests and incentives, and to reward them for their contribution to the society and economy.
The other options are not the primary concern of intellectual property rights, but rather the secondary or incidental benefits or aspects of them. The owner’s ability to maintain copyright is a means of enforcing intellectual property rights, but not the end goal of them. The right of the owner to enjoy their creation is a personal or moral right, but not a legal or economic one. The right of the owner to control the delivery method is a specific or technical aspect of intellectual property rights, but not a general or fundamental one.
An important principle of defense in depth is that achieving information security requires a balanced focus on which PRIMARY elements?
Development, testing, and deployment
Prevention, detection, and remediation
People, technology, and operations
Certification, accreditation, and monitoring
 An important principle of defense in depth is that achieving information security requires a balanced focus on the primary elements of people, technology, and operations. People are the users, administrators, managers, and other stakeholders who are involved in the security process. They need to be aware, trained, motivated, and accountable for their security roles and responsibilities. Technology is the hardware, software, network, and other tools that are used to implement the security controls and measures. They need to be selected, configured, updated, and monitored according to the security standards and best practices. Operations are the policies, procedures, processes, and activities that are performed to achieve the security objectives and requirements. They need to be documented, reviewed, audited, and improved continuously to ensure their effectiveness and efficiency.
The other options are not the primary elements of defense in depth, but rather the phases, functions, or outcomes of the security process. Development, testing, and deployment are the phases of the security life cycle, which describes how security is integrated into the system development process. Prevention, detection, and remediation are the functions of the security management, which describes how security is maintained and improved over time. Certification, accreditation, and monitoring are the outcomes of the security evaluation, which describes how security is assessed and verified against the criteria and standards.
When assessing an organization’s security policy according to standards established by the International Organization for Standardization (ISO) 27001 and 27002, when can management responsibilities be defined?
Only when assets are clearly defined
Only when standards are defined
Only when controls are put in place
Only when procedures are defined
When assessing an organization’s security policy according to standards established by the ISO 27001 and 27002, management responsibilities can be defined only when standards are defined. Standards are the specific rules, guidelines, or procedures that support the implementation of the security policy. Standards define the minimum level of security that must be achieved by the organization, and provide the basis for measuring compliance and performance. Standards also assign roles and responsibilities to different levels of management and staff, and specify the reporting and escalation procedures.
Management responsibilities are the duties and obligations that managers have to ensure the effective and efficient execution of the security policy and standards. Management responsibilities include providing leadership, direction, support, and resources for the security program, establishing and communicating the security objectives and expectations, ensuring compliance with the legal and regulatory requirements, monitoring and reviewing the security performance and incidents, and initiating corrective and preventive actions when needed.
Management responsibilities cannot be defined without standards, as standards provide the framework and criteria for defining what managers need to do and how they need to do it. Management responsibilities also depend on the scope and complexity of the security policy and standards, which may vary depending on the size, nature, and context of the organization. Therefore, standards must be defined before management responsibilities can be defined.
The other options are not correct, as they are not prerequisites for defining management responsibilities. Assets are the resources that need to be protected by the security policy and standards, but they do not determine the management responsibilities. Controls are the measures that are implemented to reduce the security risks and achieve the security objectives, but they do not determine the management responsibilities. Procedures are the detailed instructions that describe how to perform the security tasks and activities, but they do not determine the management responsibilities.
Which of the following represents the GREATEST risk to data confidentiality?
Network redundancies are not implemented
Security awareness training is not completed
Backup tapes are generated unencrypted
Users have administrative privileges
Generating backup tapes unencrypted represents the greatest risk to data confidentiality, as it exposes the data to unauthorized access or disclosure if the tapes are lost, stolen, or intercepted. Backup tapes are often stored off-site or transported to remote locations, which increases the chances of them falling into the wrong hands. If the backup tapes are unencrypted, anyone who obtains them can read the data without any difficulty. Therefore, backup tapes should always be encrypted using strong algorithms and keys, and the keys should be protected and managed separately from the tapes.
The other options do not pose as great a risk to data confidentiality as generating backup tapes unencrypted. If network redundancies are not implemented, the availability and reliability of the network suffer, but not necessarily the confidentiality of the data. If security awareness training is not completed, human errors or negligence that could compromise the data become more likely, but the exposure is less direct. If users have administrative privileges, they gain more access and control over the system and the data, but the exposure is still narrower than that of unencrypted tapes leaving the organization's custody.
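The workflow described above — encrypt before the data leaves custody, and store the key separately from the tapes — can be sketched as follows. This is a deliberately toy, standard-library-only illustration (a SHA-256 counter-mode keystream); real backup encryption should use a vetted cipher such as AES-GCM, for example via the third-party cryptography package or tape-drive hardware encryption:

```python
import hashlib
import secrets

def keystream_xor(key, data):
    """Toy stream cipher (SHA-256 in counter mode) -- illustration ONLY.

    The point is the workflow, not the cipher: data is encrypted before
    it is written to tape, and the key is managed separately from the
    tapes (e.g., in a key vault), so a lost tape reveals nothing.
    """
    out = bytearray()
    for offset in range(0, len(data), 32):
        ks = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

key = secrets.token_bytes(32)              # kept off-site, never on the tape
record = b"PII: Alice,1984-05-01,SSN 000-00-0000"
tape_image = keystream_xor(key, record)    # what actually gets written
assert tape_image != record                      # ciphertext, not plaintext
assert keystream_xor(key, tape_image) == record  # recoverable only with the key
```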
Which of the following BEST describes the responsibilities of a data owner?
Ensuring quality and validation through periodic audits for ongoing data integrity
Maintaining fundamental data availability, including data storage and archiving
Ensuring accessibility to appropriate users, maintaining appropriate levels of data security
Determining the impact the information has on the mission of the organization
The best description of the responsibilities of a data owner is determining the impact the information has on the mission of the organization. A data owner is a person or entity that has the authority and accountability for the creation, collection, processing, and disposal of a set of data. A data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. A data owner should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the best descriptions of the responsibilities of a data owner, but rather the responsibilities of other roles or functions related to data management. Ensuring quality and validation through periodic audits for ongoing data integrity is a responsibility of a data steward, who is a person or entity that oversees the quality, consistency, and usability of the data. Maintaining fundamental data availability, including data storage and archiving is a responsibility of a data custodian, who is a person or entity that implements and maintains the technical and physical security of the data. Ensuring accessibility to appropriate users, maintaining appropriate levels of data security is a responsibility of a data controller, who is a person or entity that determines the purposes and means of processing the data.
An organization has doubled in size due to a rapid market share increase. The size of the Information Technology (IT) staff has maintained pace with this growth. The organization hires several contractors whose onsite time is limited. The IT department has pushed its limits building servers and rolling out workstations and has a backlog of account management requests.
Which contract is BEST in offloading the task from the IT staff?
Platform as a Service (PaaS)
Identity as a Service (IDaaS)
Desktop as a Service (DaaS)
Software as a Service (SaaS)
Identity as a Service (IDaaS) is the best contract in offloading the task of account management from the IT staff. IDaaS is a cloud-based service that provides identity and access management (IAM) functions, such as user authentication, authorization, provisioning, deprovisioning, password management, single sign-on (SSO), and multifactor authentication (MFA). IDaaS can help the organization to streamline and automate the account management process, reduce the workload and costs of the IT staff, and improve the security and compliance of the user accounts. IDaaS can also support the contractors who have limited onsite time, as they can access the organization’s resources remotely and securely through the IDaaS provider.
The other options are not as effective as IDaaS in offloading the task of account management from the IT staff, as they do not provide IAM functions. Platform as a Service (PaaS) is a cloud-based service that provides a platform for developing, testing, and deploying applications, but it does not manage the user accounts for the applications. Desktop as a Service (DaaS) is a cloud-based service that provides virtual desktops for users to access applications and data, but it does not manage the user accounts for the virtual desktops. Software as a Service (SaaS) is a cloud-based service that provides software applications for users to use, but it does not manage the user accounts for the software applications.
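The account-management work that an IDaaS contract offloads can be sketched in miniature: contractor accounts carry an expiry so that access lapses automatically, covering the limited onsite time described in the scenario. The names, dates, and directory structure here are invented for illustration:

```python
from datetime import date

# Toy identity directory: each account records whether it is active
# and when its access expires.
directory = {}

def provision(user, expires):
    """Create an account with a built-in expiry date."""
    directory[user] = {"active": True, "expires": expires}

def deprovision_expired(today):
    """Automatically disable accounts whose expiry has passed."""
    for acct in directory.values():
        if acct["expires"] <= today:
            acct["active"] = False

provision("contractor-1", expires=date(2024, 3, 31))
provision("employee-1", expires=date(2099, 1, 1))
deprovision_expired(date(2024, 6, 1))
assert directory["contractor-1"]["active"] is False
assert directory["employee-1"]["active"] is True
```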
Which one of the following affects the classification of data?
Assigned security label
Multilevel Security (MLS) architecture
Minimum query size
Passage of time
The passage of time is one of the factors that affects the classification of data. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not static, but dynamic, meaning that it can change over time depending on various factors. One of these factors is the passage of time, which can affect the relevance, usefulness, or sensitivity of the data. For example, data that is classified as confidential or secret at one point in time may become obsolete, outdated, or declassified at a later point in time, and thus require a lower level of protection. Conversely, data that is classified as public or unclassified at one point in time may become more valuable, sensitive, or regulated at a later point in time, and thus require a higher level of protection. Therefore, data classification should be reviewed and updated periodically to reflect the changes in the data over time.
The other options are not factors that affect the classification of data, but rather the outcomes or components of data classification. Assigned security label is the result of data classification, which indicates the level of sensitivity or criticality of the data. Multilevel Security (MLS) architecture is a system that supports data classification, which allows different levels of access to data based on the clearance and need-to-know of the users. Minimum query size is a control that supports data classification in statistical databases: it requires a query to match at least a minimum number of records before results are returned, which helps prevent the inference of individual records.
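Because classification changes over time, a periodic review is the operational consequence of the point above. A minimal sketch (the record names, labels, and dates are invented) flags records whose classification review date has passed:

```python
from datetime import date

# Each record carries a review date after which its classification
# label must be re-examined (for possible downgrade or upgrade).
records = [
    {"name": "merger-plan", "label": "confidential", "review": date(2020, 1, 1)},
    {"name": "press-kit",   "label": "public",       "review": date(2030, 1, 1)},
]

def due_for_review(records, today):
    """Return the names of records whose review date has passed."""
    return [r["name"] for r in records if r["review"] <= today]

assert due_for_review(records, date(2024, 6, 1)) == ["merger-plan"]
```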
In a data classification scheme, the data is owned by the
system security managers
business managers
Information Technology (IT) managers
end users
In a data classification scheme, the data is owned by the business managers. Business managers are the persons or entities that have the authority and accountability for the creation, collection, processing, and disposal of a set of data. Business managers are also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. Business managers should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the data owners in a data classification scheme, but rather the other roles or functions related to data management. System security managers are the persons or entities that oversee the security of the information systems and networks that store, process, and transmit the data. They are responsible for implementing and maintaining the technical and physical security of the data, as well as monitoring and auditing the security performance and incidents. Information Technology (IT) managers are the persons or entities that manage the IT resources and services that support the business processes and functions that use the data. They are responsible for ensuring the availability, reliability, and scalability of the IT infrastructure and applications, as well as providing technical support and guidance to the users and stakeholders. End users are the persons or entities that access and use the data for their legitimate purposes and needs. They are responsible for complying with the security policies and procedures for the data, as well as reporting any security issues or violations.
Which of the following is MOST important when assigning ownership of an asset to a department?
The department should report to the business owner
Ownership of the asset should be periodically reviewed
Individual accountability should be ensured
All members should be trained on their responsibilities
When assigning ownership of an asset to a department, the most important factor is to ensure individual accountability for the asset. Individual accountability means that each person who has access to or uses the asset is responsible for its protection and proper handling. Individual accountability also implies that each person who causes or contributes to a security breach or incident involving the asset can be identified and held liable. Individual accountability can be achieved by implementing security controls such as authentication, authorization, auditing, and logging.
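The controls named above can be sketched in a few lines. This is an illustrative toy only: the function names and record fields are hypothetical, not taken from any standard, and a real system would write to tamper-evident storage rather than an in-memory list.

```python
from datetime import datetime, timezone

# Illustrative sketch: an audit trail that ties every action on an asset
# to a specific, authenticated individual (never a shared account).
audit_log = []

def record_access(user_id: str, asset: str, action: str, success: bool) -> dict:
    """Append an audit record so each action is attributable to one person."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,   # the authenticated individual
        "asset": asset,
        "action": action,
        "success": success,
    }
    audit_log.append(entry)
    return entry

record_access("j.smith", "payroll-db", "read", True)
record_access("j.smith", "payroll-db", "update", False)

# Accountability check: every entry names exactly one individual.
assert all(e["user_id"] for e in audit_log)
```

Because every record carries an individual identity, a later investigation can establish who did what to the asset and when, which is precisely what individual accountability requires.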
The other options are not as important as ensuring individual accountability, because they do not directly address the security risks associated with the asset. Having the department report to the business owner is a management issue, not a security issue. Periodically reviewing ownership of the asset is good practice, but it does not prevent misuse or abuse of the asset. Training all members on their responsibilities is a preventive measure, but it does not guarantee compliance with or enforcement of those responsibilities.
Which of the following is an initial consideration when developing an information security management system?
Identify the contractual security obligations that apply to the organizations
Understand the value of the information assets
Identify the level of residual risk that is tolerable to management
Identify relevant legislative and regulatory compliance requirements
When developing an information security management system (ISMS), an initial consideration is to understand the value of the information assets that the organization owns or processes. An information asset is any data, information, or knowledge that has value to the organization and supports its mission, objectives, and operations. Understanding the value of the information assets helps to determine the appropriate level of protection and investment for them, as well as the potential impact and consequences of losing, compromising, or disclosing them. Understanding the value of the information assets also helps to identify the stakeholders, owners, and custodians of the information assets, and their roles and responsibilities in the ISMS.
The other options are not initial considerations, but rather subsequent or concurrent considerations when developing an ISMS. Identifying the contractual security obligations that apply to the organization depends on the nature, scope, and context of the information assets, as well as the relationships and agreements with external parties. Identifying the level of residual risk that is tolerable to management depends on the organization's risk appetite and tolerance, as well as the risk assessment and analysis of the information assets. Identifying relevant legislative and regulatory compliance requirements depends on the organization's legal and ethical obligations, as well as the jurisdiction and industry in which it operates.
When implementing a data classification program, why is it important to avoid too much granularity?
The process will require too many resources
It will be difficult to apply to both hardware and software
It will be difficult to assign ownership to the data
The process will be perceived as having value
When implementing a data classification program, it is important to avoid too much granularity, because the process will require too many resources. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not a simple or straightforward process, as it involves many factors, such as the nature, context, and scope of the data, the stakeholders, the regulations, and the standards. If the data classification program has too many levels or categories, it will increase the complexity, cost, and time of the process, and reduce the efficiency and effectiveness of data protection. Therefore, data classification should strike a balance between granularity and simplicity, and follow the principle of proportionality: the level of protection should be proportional to the level of risk.
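The granularity trade-off can be illustrated with a minimal sketch. The level names and control fields below are hypothetical examples, not a prescribed scheme; the point is that each added level multiplies the handling rules the organization must define, apply, and maintain.

```python
# Hypothetical three-level scheme: a small number of levels keeps the
# mapping from classification to required controls manageable. Every
# extra level would need its own row of controls, training, and labeling.
CONTROLS = {
    "public":       {"encryption": False, "access_review": "annual"},
    "internal":     {"encryption": False, "access_review": "quarterly"},
    "confidential": {"encryption": True,  "access_review": "monthly"},
}

def controls_for(level: str) -> dict:
    """Look up the handling requirements for a classification level."""
    try:
        return CONTROLS[level]
    except KeyError:
        raise ValueError(f"unknown classification level: {level!r}")

assert controls_for("confidential")["encryption"] is True
```

With three levels there are three sets of controls to keep consistent; a scheme with ten or fifteen levels would demand proportionally more resources for little additional protection, which is the argument against excessive granularity.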
The other options are not the main reasons to avoid too much granularity in data classification, but rather potential challenges or benefits of data classification. Difficulty applying the scheme to both hardware and software is a challenge of data classification, as it requires consistent and compatible methods and tools for labeling and protecting data across different types of media and devices. Difficulty assigning ownership to the data is a challenge of data classification, as it requires clear and accountable roles and responsibilities for the creation, collection, processing, and disposal of data. The process being perceived as having value is a benefit of data classification, as it demonstrates the organization's commitment to protecting its data assets and complying with its obligations.
Which of the following is an effective control in preventing electronic cloning of Radio Frequency Identification (RFID) based access cards?
Personal Identity Verification (PIV)
Cardholder Unique Identifier (CHUID) authentication
Physical Access Control System (PACS) repeated attempt detection
Asymmetric Card Authentication Key (CAK) challenge-response
Asymmetric Card Authentication Key (CAK) challenge-response is an effective control in preventing electronic cloning of RFID based access cards. RFID based access cards are contactless cards that use radio frequency identification (RFID) technology to communicate with a reader and grant access to a physical or logical resource. RFID based access cards are vulnerable to electronic cloning, which is the process of copying the data and identity of a legitimate card to a counterfeit card, and using it to impersonate the original cardholder and gain unauthorized access. Asymmetric CAK challenge-response is a cryptographic technique that prevents electronic cloning by using public key cryptography and digital signatures to verify the authenticity and integrity of the card and the reader. Asymmetric CAK challenge-response works as follows:
- The reader generates a fresh random challenge (nonce) and sends it to the card.
- The card signs the challenge with its private Card Authentication Key, which is stored in the card's secure element and never leaves the card.
- The card returns the signature, together with a certificate or other credential containing its public key.
- The reader validates the credential, verifies the signature with the card's public key, and grants access only if the verification succeeds.
Asymmetric CAK challenge-response prevents electronic cloning because the private keys of the card and the reader are never transmitted or exposed, and the signatures are unique and non-reusable for each transaction. Therefore, a cloned card cannot produce a valid signature without knowing the private key of the original card, and a rogue reader cannot impersonate a legitimate reader without knowing its private key.
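The core of the mechanism can be sketched with textbook RSA. This is an educational toy only, assuming deliberately tiny demonstration parameters: real CAK implementations use full-size keys, proper signature padding, and card certificates, none of which is shown here.

```python
# Educational sketch of asymmetric challenge-response using textbook RSA
# with tiny demo parameters (n = 61 * 53 = 3233). Not secure code.
N, E = 3233, 17   # public key (n, e), known to the reader
D = 2753          # private key d, never leaves the genuine card

def card_sign(challenge: int) -> int:
    """The genuine card signs the reader's challenge with its private key."""
    return pow(challenge, D, N)

def reader_verify(challenge: int, signature: int) -> bool:
    """The reader verifies the response using only the card's public key."""
    return pow(signature, E, N) == challenge

# In practice the reader sends a fresh random nonce per transaction;
# a fixed value is used here so the example is deterministic.
challenge = 1234
assert reader_verify(challenge, card_sign(challenge))

# A cloned card holds only the public data (N, E), not D, so a response
# produced without the private key fails verification.
assert not reader_verify(challenge, challenge)
```

Because the signature is computed over a fresh nonce each time, capturing one card-to-reader exchange yields nothing replayable, which is why cloning the card's public data is not enough to pass the check.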
The other options are not as effective as asymmetric CAK challenge-response in preventing electronic cloning of RFID based access cards. Personal Identity Verification (PIV) is a standard for federal employees and contractors to use smart cards for physical and logical access, but it does not specify the cryptographic technique for RFID based access cards. Cardholder Unique Identifier (CHUID) authentication is a technique that uses a unique number and a digital certificate to identify the card and the cardholder, but it does not prevent replay attacks or verify the reader’s identity. Physical Access Control System (PACS) repeated attempt detection is a technique that monitors and alerts on multiple failed or suspicious attempts to access a resource, but it does not prevent the cloning of the card or the impersonation of the reader.
TESTED 23 Nov 2024