CCNA Cyber Ops – SECFND Exam topic answers

So here we have the SECFND exam topic answers. You can read through them and pass your exam more easily, because they follow the official SECFND exam topics published by Cisco. If you need any other kind of help, leave a comment.



1.0 Network Concepts

1.1 Describe the function of the network layers as specified by the OSI and the TCP/IP network models.

1.2 Describe the operation of the following

1.2.a IP
1.2.b TCP
1.2.c UDP
1.2.d ICMP




1.3 Describe the operation of these network services

1.3.a ARP
1.3.b DNS
1.3.c DHCP



1.4 Describe the basic operation of these network device types

1.4.a Router
1.4.b Switch
1.4.c Hub
1.4.d Bridge
1.4.e Wireless access point (WAP)
1.4.f Wireless LAN controller (WLC)

1.5 Describe the functions of these network security systems as deployed on the host, network, or the cloud:

1.5.a Firewall: A firewall is a network security device that monitors incoming and outgoing network traffic and decides whether to allow or block specific traffic based on a defined set of security rules.
1.5.b Cisco Intrusion Prevention System (IPS): An intrusion prevention system monitors a network or systems for malicious activity or policy violations and, because it sits inline, can actively block or drop the offending traffic. By contrast, an intrusion detection system (IDS) performs the same monitoring passively: it detects and alerts, but does not block.
1.5.c Cisco Advanced Malware Protection (AMP): Malware, short for “malicious software,” refers to a type of computer program designed to infect a legitimate user’s computer and inflict harm on it in multiple ways. Malware can infect computers and devices in several ways and comes in a number of forms, just a few of which include viruses, worms, Trojans, spyware, or any other type of malicious code that infiltrates a computer. Cisco AMP is a next-generation endpoint security product that prevents breaches, continuously monitors all file behavior to uncover stealthy attacks, and can detect, block, and remediate advanced malware across all endpoints.
1.5.d Web Security Appliance (WSA): A security appliance is any form of server appliance that is designed to protect computer networks from unwanted traffic. Cisco Cloud Web Security (CWS): As a cloud-delivered web proxy, our Cloud Web Security product provides security and control for the distributed enterprise across one of the top attack vectors: the web. Users are protected on any device and in any location through Cisco worldwide threat intelligence and advanced threat defense capabilities.
1.5.e Email Security Appliance (ESA): Cisco Email Security protects against ransomware, business email compromise, spoofing, and phishing. Cisco Cloud Email Security (CES)

1.6 Describe IP subnets and communication within an IP subnet and between IP subnets
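
To make subnet communication concrete, here is a minimal sketch using Python’s standard ipaddress module (the addresses are invented): a host delivers directly to destinations on its own subnet and sends everything else to its default gateway.

```python
import ipaddress

local_if = ipaddress.ip_interface("192.168.1.10/24")  # this host's address/mask
gateway = ipaddress.ip_address("192.168.1.1")         # default gateway

for dst in ["192.168.1.57", "10.0.0.8"]:
    dst_ip = ipaddress.ip_address(dst)
    if dst_ip in local_if.network:
        # Same subnet: resolve the MAC with ARP and deliver directly.
        print(f"{dst}: same subnet {local_if.network} -> deliver directly")
    else:
        # Different subnet: hand the packet to the router.
        print(f"{dst}: different subnet -> send to gateway {gateway}")
```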

1.7 Describe the relationship between VLANs and data visibility: When properly configured, VLAN segmentation severely hinders access to system attack surfaces. It reduces packet-sniffing capabilities and increases threat agent effort. Finally, authorized users only “see” the servers and other devices necessary to perform their daily tasks.

1.8 Describe the operation of ACLs applied as packet filters on the interfaces of network devices: Access lists filter network traffic by controlling whether routed packets are forwarded or blocked at the router’s interfaces.
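
As a rough illustration of first-match ACL processing (a toy model in Python, not any specific IOS syntax), the following sketch evaluates invented rules top-down and falls through to the implicit deny:

```python
import ipaddress

# Simplified rules: (action, protocol, source prefix, destination port).
ACL = [
    ("permit", "tcp", "10.1.1.0/24", 443),
    ("deny",   "tcp", "10.1.0.0/16", 23),
    ("permit", "udp", "0.0.0.0/0",   53),
]

def acl_decision(proto, src, dport):
    # Rules are evaluated top-down; the first match wins.
    for action, r_proto, r_prefix, r_port in ACL:
        if (proto == r_proto
                and ipaddress.ip_address(src) in ipaddress.ip_network(r_prefix)
                and dport == r_port):
            return action
    return "deny"  # the implicit "deny any" at the end of every ACL

print(acl_decision("tcp", "10.1.1.5", 443))   # permit (matches rule 1)
print(acl_decision("tcp", "10.1.2.9", 23))    # deny   (matches rule 2)
print(acl_decision("icmp", "10.1.1.5", 0))    # deny   (implicit deny)
```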

1.9 Compare and contrast deep packet inspection (deep packet inspection (DPI) provides the ability to look into the packet past the basic header information; DPI intelligently determines the contents of a particular packet, and then either records that information for statistical purposes or performs an action on the packet) with packet filtering (packet filtering is a firewall technique used to control network access by monitoring outgoing and incoming packets and allowing them to pass or halt based on the source and destination Internet Protocol (IP) addresses, protocols, and ports) and stateful firewall operation (a stateful firewall tracks the operating state and characteristics of network connections traversing it; it is configured to distinguish legitimate packets for different types of connections, and only packets matching a known active connection are allowed to pass)
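
The stateful part of the comparison can be sketched in a few lines of Python: outbound packets create connection state, and inbound packets are allowed only when they match a tracked connection. This is a toy model; real firewalls also track TCP flags, sequence numbers, and timeouts.

```python
# Connection table: (proto, inside_ip, inside_port, outside_ip, outside_port)
active = set()

def outbound(proto, src, sport, dst, dport):
    # Outbound traffic from the inside creates state.
    active.add((proto, src, sport, dst, dport))
    return "allow"

def inbound(proto, src, sport, dst, dport):
    # An inbound packet must be the reverse of a tracked outbound flow.
    if (proto, dst, dport, src, sport) in active:
        return "allow (matches active connection)"
    return "drop (no state)"

outbound("tcp", "192.168.1.10", 51000, "203.0.113.5", 443)
print(inbound("tcp", "203.0.113.5", 443, "192.168.1.10", 51000))   # allow
print(inbound("tcp", "198.51.100.7", 443, "192.168.1.10", 51000))  # drop
```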

1.10 Compare and contrast inline traffic interrogation (an inline tool passes live traffic directly through itself to process the traffic before it is forwarded on to its final destination) and taps (a network TAP is a simple device that connects directly to the cabling infrastructure to split or copy packets for use in analysis, security, or general network management) or traffic mirroring (SPAN, Switched Port Analyzer, is a software function of a switch or router that duplicates traffic from incoming or outgoing ports and forwards the copied traffic to a special SPAN, or mirror, port)

1.11 Compare and contrast the characteristics of data obtained from taps or traffic mirroring and NetFlow in the analysis of network traffic: taps and SPAN sessions yield complete packet contents, while NetFlow yields summarized flow records, that is, metadata about who talked to whom, when, for how long, and how much data was exchanged. (See the IEEE paper on NetFlow.)

1.12 Identify potential data loss from provided traffic profiles: an inline tool, such as an Intrusion Prevention System (IPS), can drop or even add packets in the production network. Since it runs inline, a tool failure could be devastating and bring down the entire system.

Note: It is not clear whether “data loss” here means potential problems with the monitoring itself or data lost to unauthorized users. The following paragraph was taken from the Cisco Cloud Security 1.0 Design Guide, chapter “End-To-End Visibility”.

Detecting Data Loss
Data loss describes the loss of critical business data to unauthorized users. Data loss typically involves a data breach and back-end transmission of sensitive data such as credit-card data, patient or financial information. Detecting data loss is imperative for implementing security controls for various compliance regimes such as PCI DSS and HIPAA. However, data loss incidents often go undetected.

Data loss incidents normally involve asymmetrical outbound flows, in which outbound flows significantly outweigh a few inbound packets. Cisco CTD can trigger data loss alarms on such conditions. NetFlow generated flows contain flow direction, so Cisco CTD can leverage NetFlow generated flows and trigger data loss alarms on asymmetrical flows.
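
As a toy illustration of that asymmetry check (not Cisco CTD’s actual logic), the following Python sketch aggregates invented flow records per host and flags hosts whose outbound bytes dwarf their inbound bytes:

```python
from collections import defaultdict

flows = [
    # (host, direction, bytes) -- invented flow records
    ("10.1.1.20", "out", 9_500_000), ("10.1.1.20", "in", 40_000),
    ("10.1.1.31", "out", 120_000),   ("10.1.1.31", "in", 110_000),
]

totals = defaultdict(lambda: {"in": 0, "out": 0})
for host, direction, nbytes in flows:
    totals[host][direction] += nbytes

for host, t in totals.items():
    ratio = t["out"] / max(t["in"], 1)
    if ratio > 10:  # the threshold is arbitrary for this sketch
        print(f"ALERT possible data loss: {host} out/in ratio {ratio:.0f}x")
```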

2.0 Security Concepts

2.1 Describe the principles of the defense in depth strategy: Defense in depth is the coordinated use of multiple security countermeasures to protect the integrity of the information assets in an enterprise. The strategy is based on the military principle that it is more difficult for an enemy to defeat a complex and multi-layered defense system than to penetrate a single barrier. Defense in depth can be divided into three areas: Physical, Technical, and Administrative.

Physical controls are anything that physically limits or prevents access to IT systems. Examples include fences, guards, dogs, and CCTV systems.

Technical controls are hardware or software whose purpose is to protect systems and resources. Examples of technical controls would be disk encryption, fingerprint readers, and Windows Active Directory. Hardware technical controls differ from physical controls in that they prevent access to the contents of a system, but not the physical systems themselves.

Administrative controls are an organization’s policies and procedures. Their purpose is to ensure that there is proper guidance available in regards to security and that regulations are met. They include things such as hiring practices, data handling procedures, and security requirements.

2.2 Compare and contrast these concepts

  • 2.2.a Risk: the potential that a given threat will exploit vulnerabilities of an asset or group of assets and thereby cause harm to the organization. It is measured in terms of a combination of the probability of occurrence of an event and its consequence.
    • Risk = Likelihood * Impact (a small scoring sketch follows this list)
  • 2.2.b Threat: In computer security, a threat is a possible danger that might exploit a vulnerability to breach security and therefore cause possible harm.
  • 2.2.c Vulnerability: In computer security, a vulnerability is a weakness which allows an attacker to reduce a system’s information assurance. A vulnerability is the intersection of three elements: a system susceptibility or flaw, attacker access to the flaw, and attacker capability to exploit the flaw.
  • 2.2.d Exploit: An exploit is a piece of software, a chunk of data, or a sequence of commands that takes advantage of a bug or vulnerability in order to cause an unintended or unanticipated behavior to occur on computer software, hardware, or something electronic (usually computerized). Such behavior frequently includes things like gaining control of a computer system, allowing privilege escalation, or a denial-of-service (DoS or related DDoS) attack.
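
To make the Risk = Likelihood * Impact formula concrete, here is a minimal Python sketch that scores a few invented risks on 1–5 scales and ranks them most-to-least critical, as a risk assessment would:

```python
risks = [
    # (description, likelihood 1-5, impact 1-5) -- invented examples
    ("Unpatched web server exploited", 4, 5),
    ("Laptop lost with encrypted disk", 3, 2),
    ("Insider misuses database access", 2, 4),
]

# Score each risk with the formula above, then rank highest first.
scored = sorted(((desc, l * i) for desc, l, i in risks),
                key=lambda r: r[1], reverse=True)

for desc, score in scored:
    print(f"{score:>2}  {desc}")
```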

2.3 Describe these terms

  • 2.3.a Threat actor: A threat actor, or malicious actor, is a person or entity that is responsible for an event or incident that impacts, or has the potential to impact, the safety or security of another entity. Most often, the term is used to describe the individuals and groups that perform malicious acts against organizations of various types and sizes. From a threat intelligence perspective, threat actors are often categorized as unintentional or intentional and external or internal.
  • 2.3.b Run book automation (RBA): Runbook automation (RBA) is the ability to define, build, orchestrate, manage, and report on workflows that support system and network operational processes. A runbook workflow can potentially interact with all types of infrastructure elements, such as applications, databases, and hardware.
  • 2.3.c Chain of custody (evidentiary): Chain of custody (CoC), in legal contexts, refers to the chronological documentation or paper trail showing the seizure, custody, control, transfer, analysis, and disposition of physical or electronic evidence. It is essential that any items of evidence can be traced from the crime scene to the courtroom, and everywhere in between. This is known as maintaining the ‘chain of custody’ or ‘continuity of evidence’. You must have the ability to prove that a particular piece of evidence was at a particular place, at a particular time, and in a particular condition. This applies to the physical hardware as well as the information being retrieved from that hardware. If the chain of custody is broken, the forensic investigation may be fatally compromised. This is where proper management of the evidence is important.
  • 2.3.d Reverse engineering: Reverse engineering is taking apart an object to see how it works in order to duplicate or enhance the object. The practice, taken from older industries, is now frequently used in computer hardware and software. Software reverse engineering involves reversing a program’s machine code (the string of 0s and 1s that are sent to the logic processor) back into the source code that it was written in, using program language statements.
  • 2.3.e Sliding window anomaly detection: A traffic profile is based on connection data collected over a time span that you specify, called the profiling time window (PTW). The PTW is a sliding window; that is, if your PTW is one week (the default), your traffic profile includes connection data collected over the last week. You can change the PTW to be as short as an hour or as long as several weeks. After you create a traffic profile, you can detect abnormal network traffic by evaluating new traffic against your profile, which presumably represents normal network traffic. (A small sliding-window sketch follows this list.)
  • 2.3.f PII: Personally identifiable information (PII), or sensitive personal information (SPI), as used in information security and privacy laws, is information that can be used on its own or with other information to identify, contact, or locate a single person, or to identify an individual in context.
  • 2.3.g PHI: Protected health information (PHI) under US law is any information about health status, provision of healthcare, or payment for health care that is created or collected by a “Covered Entity” (or a Business Associate of a Covered Entity), and can be linked to a specific individual.
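
A minimal sliding-window check in Python, assuming the profile idea from 2.3.e (the sample byte counts are invented, and a real PTW spans hours to weeks rather than seven samples):

```python
from collections import deque
from statistics import mean, stdev

# Profiling window: keep only the most recent 7 observations.
window = deque(maxlen=7)
for baseline in [980, 1010, 995, 1020, 990, 1005, 1000]:
    window.append(baseline)

def is_anomalous(value, k=3.0):
    # Flag anything more than k standard deviations from the profile mean.
    mu, sigma = mean(window), stdev(window)
    return abs(value - mu) > k * sigma

print(is_anomalous(1008))    # False: consistent with the profile
print(is_anomalous(25_000))  # True: e.g. a sudden outbound data spike
```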

2.4 Describe these security terms

  • 2.4.a Principle of least privilege: In information security, computer science, and other fields, the principle of least privilege (also known as the principle of minimal privilege or the principle of least authority) requires that in a particular abstraction layer of a computing environment, every module (such as a process, a user, or a program, depending on the subject) must be able to access only the information and resources that are necessary for its legitimate purpose.
  • 2.4.b Risk scoring/risk weighting: First, gather information about the threat agent involved, the attack that will be used, the vulnerability involved, and the impact of a successful exploit on the business. Then, assign a score or weight to the risk, this value will be used in the risk assessment.
  • 2.4.c Risk reduction: The application of one or more measures to reduce the likelihood of an unwanted occurrence and/or lessen its consequences.
  • 2.4.d Risk assessment: is the process of assessing the probabilities and consequences of risk events if they are realized. The results of this assessment are then used to prioritize risks to establish a most-to-least-critical importance ranking. Ranking risks in terms of their criticality or importance provides insights to the project’s management on where resources may be needed to manage or mitigate the realization of high-probability/high-consequence risk events.

2.5 Compare and contrast these access control models: Access control is basically identifying a person doing a specific job, authenticating them by looking at their identification, then giving that person only the key to the door or computer that they need access to and nothing more. In the world of information security, one would look at this as granting an individual permission to get onto a network via a username and password, allowing them access to files, computers, or other hardware or software the person requires, and ensuring they have the right level of permission (i.e. read only) to do their job.

  • 2.5.a Discretionary access control: this access control model is based on a user’s discretion. The owner of the resource can give access rights to that resource to other users based on his discretion.
  • 2.5.b Mandatory access control: In this Model, users/owners do not enjoy the privilege of deciding who can access their files. In this model, the operating system is the decision maker overriding the user’s wishes. Every Subject (users) and Object (resources) are classified and assigned a security label. The security labels of the subject and the object along with the security policy determine if the subject can access the object. The rules for how subjects access objects are made by the security officer, configured by the administrator, enforced by the operating system, and supported by security technologies.
  • 2.5.c Nondiscretionary access control: The Role Based Access Control (RBAC) model provides access control based on the subject’s role in the organization. So, instead of assigning John permissions as a security manager, the position of security manager already has permissions assigned to it.

2.6 Compare and contrast these terms

  • 2.6.a Network and host antivirus: A network antivirus inspects traffic at the network level (for example, on a gateway or dedicated appliance) to stop malware before it reaches hosts. A host antivirus is software installed on an endpoint to prevent, detect, and remove malicious software once it has reached that system.
  • 2.6.b Agentless and agent-based protections: Agentless monitoring is deployed in one of two ways: using a remote API exposed by the platform or service being monitored, or directly analyzing network packets flowing between service components. In either case, no special deployment of agents is required. In agent-based protection, the monitored endpoint requires an installation of the software agent. Monitoring with agents has the cost of installation, configuration (proportionate to the number of managed elements), platform support needs, and dependencies. You also need to worry about patching.
  • 2.6.c Security Information and Event Management (SIEM) and Log Collection: SIEM provides real-time analysis of security alerts generated by network hardware and applications. In log collection, the events from the assets on the network, such as servers, switches, routers, storage arrays, operating systems, and firewalls are saved to a location for further analysis.
  • 2.6.d Log management (LM): comprises an approach to dealing with large volumes of computer-generated log messages (also known as audit records, audit trails, event-logs, etc.). Log Management generally covers:
    • Log collection
    • Centralized log aggregation
    • Long-term log storage and retention
    • Log rotation
    • Log analysis (in real-time and in bulk after storage)
    • Log search and reporting.

2.7 Describe these concepts

  • 2.7.a Asset management (ITAM): It is the set of business practices that join financial, contractual and inventory functions to support life cycle management and strategic decision making for the IT environment. Assets include all elements of software and hardware that are found in the business environment.
  • 2.7.b Configuration management: It is a systems engineering process for establishing and maintaining consistency of a product’s performance, functional, and physical attributes with its requirements, design, and operational information throughout its life. Attackers are looking for systems that have default settings that are immediately vulnerable. Once an attacker exploits a system, they start making changes. These two reasons are why Security Configuration Management (SCM) is so important. SCM can not only identify misconfigurations that make your systems vulnerable but also identify “unusual” changes to critical files or registry keys. (A small file-baseline sketch follows this list.)
  • 2.7.c Mobile device management: Mobile device management (MDM) is an industry term for the administration of mobile devices, such as smartphones, tablet computers, laptops and desktop computers. MDM is usually implemented with the use of a third party product that has management features for particular vendors of mobile devices. Mobile Device Management (MDM) servers secure, monitor, manage and support mobile devices deployed across mobile operators, service providers, and enterprises. MDM servers consist of a policy server that controls the use of some applications on a mobile device (for example, an e-mail application) in the deployed environment. However, the network is the only entity that can provide granular access to endpoints based on ACLs, SGTs, etc. To do its job, Cisco ISE queries the MDM servers for the necessary device attributes to ensure it is then able to provide network access control for those devices.
  • 2.7.d Patch management: A patch is a piece of software designed to update a computer program or its supporting data, to fix or improve it. This includes fixing security vulnerabilities and other bugs, with such patches usually called bugfixes or bug fixes, and improving the usability or performance. Patch management is a strategy for managing patches or upgrades for software applications and technologies. A patch management plan can help a business or organization handle these changes efficiently. (Patch Management Example for Windows)
  • 2.7.e Vulnerability management: In computer security, a vulnerability is a weakness which allows an attacker to reduce a system’s information assurance. Vulnerability management is the “cyclical practice of identifying, classifying, remediating, and mitigating vulnerabilities”, especially in software and firmware. Vulnerability management is integral to computer security and network security.
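
As a small illustration of the file-integrity side of security configuration management (2.7.b), this Python sketch hashes a few placeholder paths into a baseline and reports any later change:

```python
import hashlib
import os

# Placeholder paths; on a real system these would be critical config files.
CRITICAL = ["/etc/passwd", "/etc/ssh/sshd_config"]

def snapshot(paths):
    # Map each existing file to the SHA-256 digest of its contents.
    baseline = {}
    for p in paths:
        if os.path.exists(p):
            with open(p, "rb") as f:
                baseline[p] = hashlib.sha256(f.read()).hexdigest()
    return baseline

baseline = snapshot(CRITICAL)
# ... later, after some time has passed ...
current = snapshot(CRITICAL)
for path in baseline:
    if current.get(path) != baseline[path]:
        print(f"ALERT: {path} changed since the baseline was taken")
```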

3.0 Cryptography

3.1 Describe the uses of a hash algorithm

A hash function is any function that can be used to map data of arbitrary size to data of fixed size. The values returned by a hash function are called hash values, hash codes, digests, or simply hashes.

A cryptographic hash function is a special class of hash function that has certain properties which make it suitable for use in cryptography. It is a mathematical algorithm that maps data of arbitrary size to a bit string of a fixed size (a hash function) and is designed to be a one-way function, that is, a function which is infeasible to invert. The only way to recreate the input data from an ideal cryptographic hash function’s output is to attempt a brute-force search of possible inputs to see if they produce a match. The input data is often called the message, and the output (the hash value or hash) is often called the message digest or simply the digest.

Cryptographic hash functions have many information-security applications, notably in digital signatures, message authentication codes (MACs), and other forms of authentication. They can also be used as ordinary hash functions, to index data in hash tables, for fingerprinting, to detect duplicate data or uniquely identify files, and as checksums to detect accidental data corruption. Indeed, in information-security contexts, cryptographic hash values are sometimes called (digital) fingerprints, checksums, or just hash values.
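
The fixed-size, one-way behavior is easy to demonstrate with Python’s standard hashlib; note how changing a single character produces a completely unrelated digest (the avalanche effect):

```python
import hashlib

for message in [b"transfer $100 to alice", b"transfer $900 to alice"]:
    digest = hashlib.sha256(message).hexdigest()
    print(f"{message!r} -> {digest}")
# Both digests are 256 bits (64 hex characters) regardless of input size,
# and the two outputs share no visible pattern despite the one-byte change.
```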

3.2 Describe the uses of encryption algorithms

Encryption algorithms transform readable plaintext into unreadable ciphertext so that only parties holding the correct key can recover the original data. They are used to protect the confidentiality of data in transit (for example, in TLS, SSH, and IPsec) and data at rest (full-disk and file encryption), so that captured traffic or a stolen storage medium reveals nothing useful to an attacker.

3.3 Compare and contrast symmetric and asymmetric encryption algorithms: Symmetric-key algorithms are algorithms for cryptography that use the same cryptographic keys for encryption of plaintext and decryption of ciphertext. Public key cryptography, or asymmetric cryptography, is any cryptographic system that uses pairs of keys: public keys which may be disseminated widely, and private keys which are known only to the owner. This accomplishes two functions: authentication, which is when the public key is used to verify that a holder of the paired private key sent the message, and encryption, whereby only the holder of the paired private key can decrypt the message encrypted with the public key.
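
The contrast can be sketched in Python, assuming the third-party cryptography package (pip install cryptography): Fernet uses one shared key for both encryption and decryption, while RSA lets anyone encrypt with the public key but only the private-key holder decrypt.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Symmetric (Fernet wraps AES): the same key must be shared by both parties.
shared_key = Fernet.generate_key()
token = Fernet(shared_key).encrypt(b"meet at noon")
print(Fernet(shared_key).decrypt(token))

# Asymmetric (RSA): the public key may be handed out freely; only the
# matching private key can decrypt what was encrypted with it.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ciphertext = private_key.public_key().encrypt(b"meet at noon", oaep)
print(private_key.decrypt(ciphertext, oaep))
```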

3.4 Describe the processes of digital signature creation and verification

A digital signature is a mathematical scheme for demonstrating the authenticity of digital messages or documents. A valid digital signature gives a recipient reason to believe that the message was created by a known sender (authentication), that the sender cannot deny having sent the message (non-repudiation), and that the message was not altered in transit (integrity).

Digital signatures are based on public key cryptography, also known as asymmetric cryptography. Using a public key algorithm such as RSA, one can generate two keys that are mathematically linked: one private and one public. To create a digital signature, signing software (such as an email program) creates a one-way hash of the electronic data to be signed. The private key is then used to encrypt the hash. The encrypted hash — along with other information, such as the hashing algorithm — is the digital signature. The reason for encrypting the hash instead of the entire message or document is that a hash function can convert an arbitrary input into a fixed length value, which is usually much shorter. This saves time since hashing is much faster than signing.
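
Here is a sketch of that sign-and-verify flow, again assuming the third-party cryptography package; the library performs the hash-then-sign steps described above inside sign():

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
document = b"I agree to the terms."

# Creation: the data is hashed and the hash is signed with the private key.
signature = private_key.sign(document, pss, hashes.SHA256())

# Verification: succeeds for the original data, fails if even a byte changed.
public_key = private_key.public_key()
for data in [document, b"I agree to the terms!"]:
    try:
        public_key.verify(signature, data, pss, hashes.SHA256())
        print("valid signature:", data)
    except InvalidSignature:
        print("INVALID signature:", data)
```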


The value of the hash is unique to the hashed data. Any change in the data, even changing or deleting a single character, results in a different value. This attribute enables others to validate the integrity of the data by using the signer’s public key to decrypt the hash. If the decrypted hash matches a second computed hash of the same data, it proves that the data hasn’t changed since it was signed. If the two hashes don’t match, the data has either been tampered with in some way (integrity) or the signature was created with a private key that doesn’t correspond to the public key presented by the signer (authentication).

3.5 Describe the operation of a PKI

A public key infrastructure (PKI) is a set of roles, policies, and procedures needed to create, manage, distribute, use, store, and revoke digital certificates and manage public-key encryption.

3.6 Describe the security impact of these commonly used hash algorithms

  • 3.6.a MD5: The MD5 algorithm is a widely used hash function producing a 128-bit hash value. Although MD5 was initially designed to be used as a cryptographic hash function, it has been found to suffer from extensive vulnerabilities. It can still be used as a checksum to verify data integrity, but only against unintentional corruption.
  • 3.6.b SHA-1: Secure Hash Algorithm 1 is a cryptographic hash function designed by the United States National Security Agency and is a U.S. Federal Information Processing Standard published by the United States NIST. SHA-1 produces a 160-bit (20-byte) hash value known as a message digest. A SHA-1 hash value is typically rendered as a hexadecimal number, 40 digits long. SHA-1 is no longer considered secure against well-funded opponents.
  • 3.6.c SHA-2: Secure Hash Algorithm 2 is a set of cryptographic hash functions designed by the National Security Agency (NSA). SHA-2 includes significant changes from its predecessor, SHA-1. The SHA-2 family consists of six hash functions with digests (hash values) that are 224, 256, 384 or 512 bits: SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, SHA-512/256.
    • 3.6.c.1 SHA-256
    • 3.6.c.2 SHA-512
  • 3.6.d SHA-3
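
All of these algorithms are exposed by Python’s standard hashlib, which makes the digest-size differences easy to see (digest size alone does not restore MD5 or SHA-1 to safety; both are considered broken for collision resistance):

```python
import hashlib

for name in ["md5", "sha1", "sha256", "sha512", "sha3_256"]:
    h = hashlib.new(name, b"example")
    # digest_size is in bytes; multiply by 8 for the bit length.
    print(f"{name:<8} {h.digest_size * 8:>4}-bit digest: {h.hexdigest()[:16]}...")
```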

3.7 Describe the security impact of these commonly used encryption algorithms and secure communications protocols

  • 3.7.a DES: Data Encryption Standard is a symmetric-key algorithm for the encryption of electronic data. Although now considered insecure, it was highly influential in the advancement of modern cryptography.
  • 3.7.b 3DES: Triple DES, officially the Triple Data Encryption Algorithm (TDEA or Triple DEA), is a symmetric-key block cipher, which applies the Data Encryption Standard (DES) cipher algorithm three times to each data block.
  • 3.7.c AES: The Advanced Encryption Standard, also known by its original name Rijndael, is a specification for the encryption of electronic data established by the U.S. National Institute of Standards and Technology (NIST) in 2001. AES is based on a design principle known as a substitution-permutation network, a combination of both substitution and permutation, and is fast in both software and hardware. Unlike its predecessor DES, AES does not use a Feistel network. AES is a variant of Rijndael which has a fixed block size of 128 bits, and a key size of 128, 192, or 256 bits. By contrast, the Rijndael specification per se is specified with block and key sizes that may be any multiple of 32 bits, both with a minimum of 128 and a maximum of 256 bits.
  • 3.7.d AES256-CTR: AES-256 is a symmetric encryption algorithm that has become ubiquitous, due to the acceptance of the algorithm by the U.S. and Canadian governments as a standard for encrypting data in transit and data at rest. Because of the length of the key (256 bits) and the number of rounds (14), brute-forcing the key would take an attacker an impractically long time.

Block cipher mode of operation: (ECB, CBC, OFB, CTR and CFB) In cryptography, a mode of operation is an algorithm that uses a block cipher to provide an information service such as confidentiality or authenticity.
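
A short sketch of AES-256 in CTR mode, assuming the third-party cryptography package: CTR turns the block cipher into a stream cipher by encrypting a counter, and the nonce must never be reused with the same key.

```python
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)    # 256-bit key
nonce = os.urandom(16)  # must be unique per message for a given key

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(b"attack at dawn") + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
print(decryptor.update(ciphertext) + decryptor.finalize())  # b'attack at dawn'
```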

  • 3.7.e RSA: RSA is one of the first practical public-key cryptosystems and is widely used for secure data transmission. In such a cryptosystem, the encryption key is public and differs from the decryption key which is kept secret. In RSA, this asymmetry is based on the practical difficulty of factoring the product of two large prime numbers, the factoring problem. RSA is made of the initial letters of the surnames of Ron Rivest, Adi Shamir, and Leonard Adleman, who first publicly described the algorithm in 1977.
  • 3.7.f DSA: The Digital Signature Algorithm (DSA) is a Federal Information Processing Standard for digital signatures. It was proposed by the National Institute of Standards and Technology (NIST) in August 1991 for use in their Digital Signature Standard (DSS) and adopted as FIPS 186 in 1993.
  • 3.7.g SSH: Secure Shell (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network. The best known example application is for remote login to computer systems by users.
  • 3.7.h SSL/TLS: Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), both frequently referred to as “SSL”, are cryptographic protocols that provide communications security over a computer network.

3.8 Describe how the success or failure of a cryptographic exchange impacts security investigation

The key exchange problem is how to exchange whatever keys or other information are needed so that no one else can obtain a copy. Historically, this required trusted couriers, diplomatic bags, or some other secure channel. With the advent of public key / private key cipher algorithms (ie, asymmetric ciphers), the encrypting key (aka, the public key of a pair) could be made public, since (at least for high quality algorithms) no one without the decrypting key (aka, the private key of that pair) could decrypt the message.

In terms of a “security investigation”, let’s first take the case of a failed exchange between the authorized parties. If the exchange fails, authentication, non-repudiation, and integrity are affected: an investigation can’t rely on the exchanged data, and the systems are left vulnerable. If the exchange succeeds, there is no problem. But this question could also be referring to the attack itself: if the attacker’s traffic is encrypted and the exchange between the system and the attacker succeeds, the investigation becomes much harder, because the investigator will have limited access to the facts of the attack, such as where it came from or the actual code of the malware, if that were the case.

3.9 Describe these items in regards to SSL/TLS

  • 3.9.a Cipher-suite: is a concept used in the Transport Layer Security (TLS) / Secure Sockets Layer (SSL) network protocols. Before TLS version 1.3, a cipher suite is a named combination of authentication, encryption, message authentication code (MAC) and key exchange algorithms used to negotiate the security settings. The format of cipher suites changed in TLS 1.3: in the TLS 1.3 specification, cipher suites are only used to negotiate the encryption and HMAC algorithms. When a TLS connection is established, a handshake, known as the TLS Handshake Protocol, occurs. Within this handshake, a client hello (ClientHello) and a server hello (ServerHello) message are passed. First, the client sends a list of the cipher suites that it supports, in order of preference. Then the server replies with the cipher suite that it has selected from the client’s list. To test which TLS ciphers a server supports, an SSL/TLS scanner may be used. (A small probe sketch follows this list.)
  • 3.9.b X.509 certificates: In cryptography, X.509 is an important standard for a public key infrastructure (PKI) to manage digital certificates and public-key encryption and a key part of the Transport Layer Security protocol used to secure both web and email communication. An ITU-T standard, X.509 specifies formats for public key certificates, certificate revocation lists, attribute certificates, and a certification path validation algorithm.
  • 3.9.c Key exchange: Key exchange (also known as “key establishment”) is any method in cryptography by which cryptographic keys are exchanged between two parties, allowing the use of a cryptographic algorithm.
  • 3.9.d Protocol version: TLS 1.0, TLS 1.1, TLS 1.2, TLS 1.3.
  • 3.9.e PKCS: stands for “Public Key Cryptography Standards”. These are a group of public-key cryptography standards devised and published by RSA Security Inc, starting in the early 1990s. The company published the standards to promote the use of the cryptography techniques to which they had patents, such as the RSA algorithm, the Schnorr signature algorithm, and several others.
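
The probe sketch referenced in 3.9.a, using only Python’s standard ssl module, reports the negotiated protocol version, cipher suite, and certificate subject for a placeholder host:

```python
import socket
import ssl

host = "www.example.com"  # placeholder target
context = ssl.create_default_context()

with socket.create_connection((host, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print("protocol version :", tls.version())    # e.g. 'TLSv1.3'
        print("cipher suite     :", tls.cipher()[0])  # e.g. 'TLS_AES_256_GCM_SHA384'
        cert = tls.getpeercert()                      # parsed X.509 certificate
        print("certificate for  :", dict(x[0] for x in cert["subject"]))
```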

4.0 Host-Based Analysis

4.1 Define these terms as they pertain to Microsoft Windows

4.1.a Processes: A process is an executing program.
4.1.b Thread: A thread is the basic unit to which the operating system allocates processor time.
4.1.c Memory allocation: The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large pool of memory called the heap or free store.
4.1.d Windows Registry: Windows stores its configuration information in a database called the registry. The registry contains profiles for each user of the computer and information about system hardware, installed programs, and property settings. Windows continually references this information during its operation.
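
A quick way to see processes, threads, and memory allocation in action is the third-party psutil package (pip install psutil), which works on Windows; this sketch lists the same objects an analyst would inspect in Task Manager:

```python
import psutil

# Enumerate running processes with their PID, name, thread count, and memory.
for proc in psutil.process_iter(["pid", "name", "num_threads", "memory_info"]):
    info = proc.info
    if info["memory_info"] is None:  # skip processes we cannot read
        continue
    name = (info["name"] or "?")[:25]
    rss_mb = info["memory_info"].rss / (1024 * 1024)  # resident memory in MiB
    print(f"{info['pid']:>6}  {name:<25} threads={info['num_threads']:<4} "
          f"rss={rss_mb:8.1f} MiB")
```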

4.1.e WMI: Windows Management Instrumentation (WMI) is a set of specifications from Microsoft for consolidating the management of devices and applications in a network from Windows computing systems. WMI is the Microsoft implementation of Web-Based Enterprise Management (WBEM), which is built on the Common Information Model (CIM), a computer industry standard for defining device and application characteristics so that system administrators and management programs can control devices and applications from multiple manufacturers or sources in the same way.

4.1.f Handles: An object is a data structure that represents a system resource, such as a file, thread, or graphic image. An application cannot directly access object data or the system resource that an object represents. Instead, an application must obtain an object handle, which it can use to examine or modify the system resource. Each handle has an entry in an internally maintained table. These entries contain the addresses of the resources and the means to identify the resource type.

4.1.g Services: Microsoft Windows services, formerly known as NT services, enable you to create long-running executable applications that run in their own Windows sessions. These services can be automatically started when the computer boots, can be paused and restarted, and do not show any user interface. These features make services ideal for use on a server or whenever you need long-running functionality that does not interfere with other users who are working on the same computer. You can also run services in the security context of a specific user account that is different from the logged-on user or the default computer account. For more information about services and Windows sessions, see the Windows SDK documentation in the MSDN Library. A Windows service is a computer program that operates in the background.

4.2 Define these terms as they pertain to Linux

4.2.a Processes: An instance of a program that is being executed. Each process has a unique PID, which is that process’s entry in the kernel’s process table.
4.2.b Fork: creates a new process by duplicating the calling process. The new process is referred to as the child process. The calling process is referred to as the parent process.
4.2.c Permissions: a system to control the ability of the users and processes to view or make changes to the contents of the filesystem.
4.2.d Symlink: a symbolic link is a file that contains a reference to another file or directory in the form of an absolute or relative path and that affects pathname resolution.
4.2.e Daemon: In multitasking computer operating systems, a daemon is a computer program that runs as a background process, rather than being under the direct control of an interactive user.
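
A minimal fork() demonstration with Python’s standard os module (Linux/Unix only): the call returns twice, giving 0 to the child and the child’s PID to the parent.

```python
import os
import sys

pid = os.fork()
if pid == 0:
    # Child process: new PID, same program image as the parent.
    print(f"child : pid={os.getpid()} parent={os.getppid()}")
    sys.exit(0)
else:
    # Parent process: fork() returned the child's PID.
    print(f"parent: pid={os.getpid()} forked child={pid}")
    os.waitpid(pid, 0)  # reap the child so it does not become a zombie
```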

4.3 Describe the functionality of these endpoint technologies in regards to security monitoring

4.3.a Host-based intrusion detection: Intrusion detection (or prevention) software installed on the endpoints as opposed to the network.
4.3.b Antimalware and antivirus: Let’s start with the differences between “viruses” and “malware.” Viruses are a specific type of malware (designed to replicate and spread), while malware is a broad term used to describe all sorts of unwanted or malicious code. Malware can include viruses, spyware, adware, nagware, trojans, worms, and more.
4.3.c Host-based firewall: A host-based firewall is a piece of software running on a single host that can restrict incoming and outgoing network activity for that host only. They can prevent a host from becoming infected and stop infected hosts from spreading malware to other hosts.
4.3.d Application-level whitelisting/blacklisting: In Windows, it is possible to configure two different methods that determine whether an application should be allowed to run. The first method, known as blacklisting, is when you allow all applications to run by default except for those you specifically do not allow. The other and more secure method is called whitelisting, which blocks every application from running by default, except for those you explicitly allow.
4.3.e Systems-based sandboxing (such as Chrome, Java, Adobe reader): Sandboxing is a technique for creating confined execution environments to protect sensitive resources from illegal access. A sandbox, as a container, limits or reduces the level of access its applications have.

4.4 Interpret these operating system log data to identify an event

4.4.a Windows security event logs: Event logs are special files that record significant events on your computer, such as when a user logs on to the computer or when a program encounters an error. Whenever these types of events occur, Windows records the event in an event log that you can read by using Event Viewer. The Security log is designed for use by the system. However, users can read and clear the Security log if they have been granted the SE_SECURITY_NAME privilege (the “manage auditing and security log” user right).
4.4.b Unix-based syslog: Syslog is a way for network devices to send event messages to a logging server – usually known as a Syslog server. The Syslog protocol is supported by a wide range of devices and can be used to log different types of events.

4.4.c Apache access logs: In order to effectively manage a web server, it is necessary to get feedback about the activity and performance of the server as well as any problems that may be occurring. The Apache HTTP Server provides very comprehensive and flexible logging capabilities.
4.4.d IIS access logs: IIS uses a flexible and efficient logging architecture. When a loggable event, usually an HTTP transaction, occurs, IIS calls the selected logging module, which then writes to one of the logs stored in %SystemRoot%\system32\Logfiles\<service_name>.
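
To show what interpreting an access-log entry looks like, this Python sketch parses one invented Apache Common Log Format line with the standard re module and flags the client error:

```python
import re

# An invented Common Log Format entry for illustration.
LINE = '203.0.113.9 - - [12/Mar/2018:10:02:41 +0000] "GET /admin HTTP/1.1" 403 199'

pattern = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\d+)'
)

m = pattern.match(LINE)
if m:
    event = m.groupdict()
    print(event)
    if event["status"].startswith("4"):
        # A single 403 is routine; repeated 403s on /admin may indicate probing.
        print("client error -- repeated 403s on /admin may indicate probing")
```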

5.0 Security Monitoring

5.1 Identify the types of data provided by these technologies

  • 5.1.a TCP dump: tcpdump is a command-line packet analyzer that captures network traffic and displays packet contents, either live or from a saved capture file. The data it provides is raw packets.
  • 5.1.b NetFlow: NetFlow provides valuable information about network users and applications, peak usage times, and traffic routing. The basic output of NetFlow is a flow record.
  • 5.1.c Next-Gen firewall: Cisco Firepower NGFW appliances combine our proven network firewall with the industry’s most effective next-gen IPS and advanced malware protection.
  • 5.1.d Traditional stateful firewall: is a network firewall that tracks the operating state and characteristics of network connections traversing it.
  • 5.1.e Application visibility and control: The Cisco Application Visibility and Control (AVC) solution is a suite of services in Cisco network devices that provides application-level classification, monitoring, and traffic control, to:
    • Improve business-critical application performance
    • Support capacity management and planning
    • Reduce network operating costs
  • 5.1.f Web content filtering: A Web filter is a program that can screen an incoming Web page to determine whether some or all of it should not be displayed to the user. The data here comes in the form of a URL by browsing or a click on a link.
  • 5.1.g Email content filtering: Cisco Email Security protects against ransomware, business email compromise, spoofing, and phishing. It uses advanced threat intelligence and a multilayered approach to protect inbound messages and sensitive outbound data. The data or message here comes in the form of an email.

5.2 Describe these types of data used in security monitoring

  • 5.2.a Full packet capture: the actual packets collected by recording network traffic in full. A packet consists of control information and user data, which is also known as the payload. Control information provides data for delivering the payload, for example: source and destination network addresses, error-detection codes, and sequencing information. Typically, control information is found in packet headers and trailers.
  • 5.2.b Session data: Session data is the summary of the communication between two network devices. Also known as a conversation or a flow, this summary data is one of the most flexible and useful forms of NSM (Network Security Monitoring) data.
  • 5.2.c Transaction data: application-specific records generated from network traffic. Logs deeper connection-level information, which may span multiple packets within a connection. Must have predefined templates for protocol formatting. Common for logging HTTP header/request information, SMTP command data, etc.
  • 5.2.d Statistical data: Overall summaries or profiles of network traffic.
  • 5.2.e Extracted content: metadata and objects (data streams, files, web pages) carved out of network traffic, in contrast to full content, which refers to the unfiltered collection of packets. In a typical NSM deployment, this data would be captured through a network tap or switch.
  • 5.2.f Alert data: judgments made by tools that inspect network traffic. Typically the result of finely tuned signatures matching against packet content, and similar in nature to transaction data. Rather than being for logging purposes, this information is intended to indicate discrete events which might be attacks.

5.3 Describe these concepts as they relate to security monitoring

  • 5.3.a Access control list (ACL): specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. Each entry in a typical ACL specifies a subject and an operation. IP ACLs control whether routed packets are forwarded or blocked at the router interface. Your router examines each packet in order to determine whether to forward or drop the packet based on the criteria that you specify within the ACL. A filesystem ACL is a data structure (usually a table) containing entries that specify individual user or group rights to specific system objects such as programs, processes, or files.
  • 5.3.b NAT/PAT: NAT (Network Address Translation) replaces a private IP address with a public IP address, translating the private addresses in the internal private network into legal, routable addresses that can be used on the public Internet. Dynamic Port Address Translation (PAT)—A group of real IP addresses are mapped to a single IP address using a unique source port of that IP address.
  • 5.3.c Tunneling: Tunneling is a technique that enables remote access users to connect to a variety of network resources (corporate home gateways or an Internet service provider) through a public data network. In general, tunnels established through the public network are point-to-point (though a multipoint tunnel is possible) and link a remote user to some resource at the far end of the tunnel. Major tunneling protocols (e.g., Layer 2 Tunneling Protocol (L2TP), Point-to-Point Tunneling Protocol (PPTP), and Layer 2 Forwarding (L2F)) encapsulate Layer 2 traffic from the remote user and send it across the public network to the far end of the tunnel, where it is de-encapsulated and sent to its destination. The most significant benefit of tunneling is that it allows for the creation of VPNs over public data networks, providing cost savings for both end users, who do not have to create dedicated networks, and for service providers, who can leverage their network investments across many VPN customers.
  • 5.3.d TOR (The Onion Router): Tor aims to conceal its users’ identities and their online activity from surveillance and traffic analysis by separating identification and routing. It is an implementation of onion routing, which encrypts and then randomly bounces communications through a network of relays run by volunteers around the globe.
  • 5.3.e Encryption: is the process of encoding messages or information in such a way that only authorized parties can access it.
  • 5.3.f P2P (Peer to Peer): in computing or networking is a distributed application architecture that partitions tasks or workloads between peers.
  • 5.3.g Encapsulation: is a method of designing modular communication protocols in which logically separate functions in the network are abstracted from their underlying structures by inclusion or information hiding within higher level objects.


  • 5.3.h Load balancing: When a router learns multiple routes to a specific network via multiple routing processes (or routing protocols, such as RIP, RIPv2, IGRP, EIGRP, and OSPF), it installs the route with the lowest administrative distance in the routing table. In a more general sense it improves the distribution of workloads across multiple computing resources, such as computers, a computer cluster, network links, central processing units, or disk drives.


5.4 Describe these NextGen IPS event types

  • 5.4.a Connection event: Connection events are the records of any connection that occurs in a monitored network.
  • 5.4.b Intrusion event: generated when the system recognizes a packet that is potentially malicious.
  • 5.4.c Host or endpoint event: events that happen on the endpoints connected to your network.
  • 5.4.d Network discovery event: Discovery events alert you to the activity on your network and provide you with the information you need to respond appropriately. They are triggered by the changes that your managed devices detect in the network segments they monitor.
  • 5.4.e NetFlow event: significant events in the life of a flow, such as creation, tear-down, and flows denied by an access rule.

5.5 Describe the function of these protocols in the context of security monitoring

  • 5.5.a DNS: is a globally distributed, scalable, hierarchical, and dynamic database that provides a mapping between hostnames, IP addresses (both IPv4 and IPv6), text records, mail exchange information (MX records), name server information (NS records), and security key information defined in Resource Records (RRs). DNS primarily translates hostnames to IP addresses or IP addresses to hostnames. Flaws in the implementation of the DNS protocol allow it to be exploited and used for malicious activities like DoS and DDoS attacks. (A small lookup sketch follows this list.)
  • 5.5.b NTP: Network Time Protocol (NTP) is a protocol designed to time-synchronize devices within a network. It is very valuable to have the correct time settings in the events logging systems, in this way the analysis of the events will be accurate.
  • 5.5.c SMTP/POP/IMAP: The email servers, and the way clients connect to them, heavily influence the way monitoring and intrusion prevention are configured. The server that provides the service must be hardened, and the connection and download methods should be secured with the techniques we’ve covered earlier in this post.
  • 5.5.d HTTP/HTTPS: The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, and hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web. HTTPS (also called HTTP over TLS, HTTP over SSL, and HTTP Secure) is a protocol for secure communication over a computer network which is widely used on the Internet. HTTPS consists of communication over Hypertext Transfer Protocol (HTTP) within a connection encrypted by Transport Layer Security, or its predecessor, Secure Sockets Layer. The main motivation for HTTPS is authentication of the visited website and protection of the privacy and integrity of the exchanged data.
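
The lookup sketch referenced in 5.5.a, using only Python’s standard socket module, performs a forward (name to IP) and reverse (IP to name) DNS lookup:

```python
import socket

name = "example.com"
addr = socket.gethostbyname(name)     # forward lookup (A record)
print(f"{name} -> {addr}")

try:
    rev = socket.gethostbyaddr(addr)  # reverse lookup (PTR record)
    print(f"{addr} -> {rev[0]}")
except socket.herror:
    print(f"{addr} has no PTR record")
```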

6.0 Attack Methods

6.1 Compare and contrast an attack surface and vulnerability: The attack surface of a software environment is the sum of the different points (the “attack vectors”) where an unauthorized user (the “attacker”) can try to enter data to or extract data from an environment. A vulnerability is a weakness which allows an attacker to reduce a system’s information assurance. Vulnerability is the intersection of three elements: a system susceptibility or flaw, attacker access to the flaw, and attacker capability to exploit the flaw.

6.2 Describe these network attacks

  • 6.2.a Denial of service: (DoS attack) is a cyber-attack where the perpetrator seeks to make a machine or network resource unavailable to its intended users by temporarily or indefinitely disrupting services of a host connected to the Internet.
  • 6.2.b Distributed denial of service: A distributed denial-of-service (DDoS) is a cyber-attack where the perpetrator uses more than one, often thousands of, unique IP addresses.
  • 6.2.c Man-in-the-middle: an attack where the attacker secretly relays and possibly alters the communication between two parties who believe they are directly communicating with each other.

6.3 Describe these web application attacks

  • 6.3.a SQL injection: is a code injection technique, used to attack data-driven applications, in which nefarious SQL statements are inserted into an entry field for execution (e.g. to dump the database contents to the attacker).
  • 6.3.b Command injections: Command injection is an attack in which the goal is the execution of arbitrary commands on the host operating system via a vulnerable application. Command injection attacks are possible when an application passes unsafe user supplied data (forms, cookies, HTTP headers etc.) to a system shell.
  • 6.3.c Cross-site scripting: (XSS) attacks are a type of injection, in which malicious scripts are injected into otherwise benign and trusted web sites.

6.4 Describe these attacks

  • 6.4.a Social engineering: An attack based on deceiving end users or administrators at a target site. Social engineering attacks are typically carried out by email or by contacting users by phone and impersonating an authorized user, in an attempt to gain unauthorized access to a system or application.
  • 6.4.b Phishing: Phishing is misrepresentation where the criminal uses social engineering to appear as a trusted identity.
  • 6.4.c Evasion methods: bypassing an information security device in order to deliver an exploit, attack, or another form of malware to a target network or system, without detection.

6.5 Describe these endpoint-based attacks

  • 6.5.a Buffer overflows: is an anomaly where a program, while writing data to a buffer, overruns the buffer’s boundary and overwrites adjacent memory locations.
  • 6.5.b Command and control (C2): the term refers to the influence an attacker has over a compromised computer system that they control.
  • 6.5.c Malware: short for malicious software, is any software used to disrupt computer or mobile operations, gather sensitive information, gain access to private computer systems, or display unwanted advertising.
  • 6.5.d Rootkit: is a collection of computer software, typically malicious, designed to enable access to a computer or areas of its software that would not otherwise be allowed (for example, to an unauthorized user) and often masks its existence or the existence of other software.
  • 6.5.e Port scanning: probing a server or host for open ports. (A minimal scan sketch follows this list.)
  • 6.5.f Host profiling: Identifying groups of Internet hosts with a similar behavior or configuration.
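
The scan sketch referenced in 6.5.e: a minimal TCP connect scan with Python’s standard socket module. Only scan hosts you are authorized to test; the target address below is a placeholder from TEST-NET-1.

```python
import socket

target = "192.0.2.10"  # placeholder (TEST-NET-1 documentation address)
for port in [22, 80, 443, 3389]:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1.0)
    # connect_ex returns 0 when the port accepted the connection.
    result = s.connect_ex((target, port))
    print(f"port {port}: {'open' if result == 0 else 'closed/filtered'}")
    s.close()
```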

6.6 Describe these evasion methods

  • 6.6.a Encryption and tunneling: One common method of evasion used by attackers is to avoid detection simply by encrypting the packets or putting them in a secure tunnel.
  • 6.6.b Resource exhaustion: a common method of evasion used by attackers is extreme resource consumption; whether the resulting denial of service is against the device itself or against the personnel managing the device does not matter to the attacker. Specialized tools can be used to create a large number of alarms that consume the resources of the IPS device and prevent attacks from being logged.
  • 6.6.c Traffic fragmentation: Fragmentation of traffic was one of the early network IPS evasion techniques used to attempt to bypass the network IPS sensor.
  • 6.6.d Protocol-level misinterpretation: Attackers also evade detection by causing the network IPS sensor to misinterpret the end-to-end meaning of network protocols.
  • 6.6.e Traffic substitution and insertion: is when the attacker attempts to substitute payload data with other data in a different format but with the same meaning. A network IPS sensor may miss such malicious payloads if it looks for data in a particular format and doesn’t recognize the true meaning of the data.
  • 6.6.f Pivot: refers to a method used by penetration testers that use the compromised system to attack other systems on the same network to avoid restrictions such as firewall configurations, which may prohibit direct access to all machines.

6.7 Define privilege escalation

Privilege Escalation is the act of exploiting a bug, design flaw or configuration oversight in an operating system or software application to gain elevated access to resources that are normally protected from an application or user.

6.8 Compare and contrast remote exploit and a local exploit

A remote exploit works over a network and exploits the security vulnerability without any prior access to the vulnerable system. A local exploit requires prior access to the vulnerable system and usually increases the privileges of the person running the exploit past those granted by the system administrator.

Well, that is all for now, and please, don’t open that link in your inbox if you don’t know who the sender is.

Hamza Arif

Hey, I’m Hamza Arif, a telecommunication student at BZU. I’m good at networking, telecommunication, and web development, work on different projects, and try my best to teach them to all of you.
