Cloud Security

Cloud Service Agreements

a) Acceptable Use Policy (AUP)

- Acceptable Use Policies should be implemented in on-premises solutions to educate users about which actions are allowed and which are prohibited on those systems. However, they should be used in cloud solutions as well.

- AUPs are commonly part of the CSA. They are used to distinguish what is or is not acceptable behavior while accessing or utilizing the cloud resources.

- These could be used to outline particular prohibited activities, as well as release the CSP of legal liability in case unlawful actions are carried out in the cloud environment by the customer.

- AUPs often include information regarding violations of the acceptable use policy.

- Many CSPs will terminate the relationship with the customer if the Acceptable Use Policy is violated in a way that may negatively impact the reputation of the CSP.

- The AUP between the CSP and the customer is a delicate document, and it requires the customer to respect how services are provided.

b) Service Level Agreement (SLA)

- A Service Level Agreement (SLA) is a contract that outlines all of the services provided by the CSP to their customer.

- This could include items such as:
* Availability
* Serviceability
* Performance

- The SLA would specify thresholds and the financial repercussions associated with not meeting those thresholds.

- Well-designed SLAs will help significantly with resolving conflicts between the provider and customer.

- In order to guarantee an SLA, providers and customers should collect and monitor key metrics.

- The reason that customers must also track these metrics is that CSPs will not usually provide them unless the customer explicitly asks for them. The burden of proof is on the customer if they decide to push back against SLA violations.
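To make the burden-of-proof point concrete, here is a minimal sketch (the function names and the 99.9% threshold are illustrative, not from any particular CSA) of computing measured availability from the customer's own periodic health checks:

```python
# Hypothetical sketch: compute measured availability from periodic health
# checks and compare it against an SLA uptime threshold.
def availability_pct(checks: list) -> float:
    # checks: one boolean per probe interval (True = service was up)
    return 100.0 * sum(checks) / len(checks)

def sla_violated(checks: list, threshold_pct: float = 99.9) -> bool:
    # True when measured availability falls below the contracted threshold
    return availability_pct(checks) < threshold_pct
```

Collecting this independently of the CSP is exactly what gives the customer evidence to push back with.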

- SLAs are often non-negotiable documents that strictly limit the liability of the provider. If a CSP is out of compliance, they rarely reimburse the customer fully for the results of non-compliance.


Roles and Responsibilities

| On-Premise | IAAS | PAAS | SAAS |
<-More Control       More Innovation ->

a) On-Premise

Customer Responsibility
-Data
-Applications
-Infrastructure Applications
-Middleware
-Servers
-Storage
-Networking
-Data Center Operations

b) IAAS

Customer Responsibility
-Data
-Applications
-Infrastructure Applications
-Middleware
-Servers

CSP Responsibility
-Storage
-Networking
-Data Center


c) PAAS

Customer Responsibility
-Data
-Applications

Customer Responsibility & CSP Responsibility
-Infrastructure Applications
-Middleware

CSP Responsibility
-Servers
-Storage
-Networking
-Data Center

d) SAAS

Customer Responsibility
-Data

CSP Responsibility
-Applications
-Infrastructure Applications
-Middleware
-Servers
-Storage
-Networking
-Data Center
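The four responsibility lists above can be summarized as a small lookup table. This is a sketch with layer names taken from the lists; the "Shared" answer for PAAS reflects the two jointly-owned layers:

```python
# Shared-responsibility matrix: who owns each stack layer per service model.
STACK = ["Data", "Applications", "Infrastructure Applications", "Middleware",
         "Servers", "Storage", "Networking", "Data Center"]

# Number of layers (counting from the top of STACK) that stay with the customer.
CUSTOMER_LAYERS = {"On-Premise": 8, "IAAS": 5, "PAAS": 2, "SAAS": 1}

def responsible_party(model: str, layer: str) -> str:
    # In PAAS, infrastructure applications and middleware are jointly owned
    if model == "PAAS" and layer in ("Infrastructure Applications", "Middleware"):
        return "Shared"
    idx = STACK.index(layer)
    return "Customer" if idx < CUSTOMER_LAYERS[model] else "CSP"
```

Moving right along the On-Premise → SAAS axis simply shrinks `CUSTOMER_LAYERS`: less control, more left to the provider.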



The Role of Formal Configuration Management and Inventory Systems

a) Configuration Management 

Developers -> Configuration Management Process
Configuration Management Process -> Testers
Configuration Management Process -> Production -> End User


Developers are responsible for understanding new feature requests, as well as bugs in the system or application, and programming the necessary changes.

Once they feel that they've made the necessary changes, they "commit" their code, that is, they send it to the next step in the process.

Before the new code can be considered for deployment to production, it must undergo rigorous testing. Testers carry out several forms of testing to ensure the following:

* Verification that the system does what it is supposed to do

* Stress testing, or testing that ensures the system can handle a larger-than-normal load.

Once testing has been completed, and the updated code is ready for deployment to production, configuration managers should examine plans:

* For deployment
* For rollback, in case issues arise
* For auditing

The Configuration Manager is responsible for notifying customers as well as creating auditable documentation for the configuration changes to be made.

Where this differs in a cloud environment is that all of this is automated for us.



b) Inventory Management

Goals of Inventory Management
*Helps reduce the time needed to solve issues
*Helps reduce the time needed to make changes
*Allows us to keep up with changes
*Allows us to maximize asset utility


Building A Foundation with Accurate Inventory

The overall goal of inventory management is to have a complete and up-to-date inventory of all assets of all system components.

This is critical in configuration management, as it gives us visibility into all of our assets. Managing a full inventory of devices and software throughout the application can be a tedious process, especially when we're making modifications to configurations on certain assets. However, as these two go hand in hand, maintaining these inventories becomes much less of a chore.

In addition, maintaining these inventories assists us when identifying potential problem areas after changes have been made.
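A minimal sketch of the bookkeeping described above, assuming assets are tracked as a dict of asset-id to attributes (all names illustrative). The diff helper is what lets us spot potential problem areas after a change window:

```python
# Compare an inventory snapshot taken before a change window against one
# taken after it, and report what was added, removed, or modified.
def inventory_diff(before: dict, after: dict) -> dict:
    return {
        "added":   [k for k in after if k not in before],
        "removed": [k for k in before if k not in after],
        "changed": [k for k in before if k in after and before[k] != after[k]],
    }
```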



3. Securing Innovative Technologies

3.1 Container Security

Traditional Web Application:

Users <-> Web Servers <-> App Servers <-> Databases


Cloud Web Application

User -> DMZ -> Internal Network (Web Servers <-> Databases)


Containerized Web Application

Master Image Container

Users <-> Web Servers <-> App Servers (Containerized) <-> Databases



Securing the Image 

One of the key elements that makes containers so attractive is the master image and how it is used. How do we create security around the master image container to ensure we're not pushing insecure code, settings, etc. to our production containers?

There are a few safeguards that we can put into place:

-Code analysis (both static and dynamic)
-Vulnerability scanning of the master image before and after changes are made
-Access management
-Automation, which also has its own security uses. All steps should be automated as much as reasonably possible; this helps remove human error, and includes deployment and testing (in addition to the above)
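The scanning and automation safeguards can be combined into an automated promotion gate. A sketch, assuming scan findings arrive as a list of dicts with a severity field (the format and the severity cutoff are illustrative, not from any particular scanner):

```python
# Hypothetical CI gate: block promotion of a master image when the
# vulnerability scan reports findings above an allowed severity.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def image_approved(findings: list, max_allowed: str = "medium") -> bool:
    # findings: e.g. [{"severity": "high", ...}, ...] from the image scanner
    limit = SEVERITY_RANK[max_allowed]
    return all(SEVERITY_RANK[f["severity"]] <= limit for f in findings)
```

Wiring a check like this into the pipeline means an insecure master image never reaches production containers without a human override.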

Hardening the host 

Probably one of the most overlooked keys to securing containers is hardening the host. Misconfigurations, or attackers gaining access to the host, compromise the security of the containers.

Here are a few methods you should consider to harden your hosts:
-Disable unused services, ports, and protocols. This limits the attack vectors that a would-be attacker has access to.
-Follow a patch management plan. This addresses vulnerabilities known to the vendor.
-Enforce password policies.
-Use encryption.
-Install IDS/IPS systems.

3.2 Machine Learning and Analytics

Terms Associated with Machine Learning

a) Data Science - This is a concept used to understand big data. In data science, we use information gathered from multiple sources in order to provide accurate predictions and insights to make critical business decisions.

Data Science is the umbrella term that encompasses data analytics, data mining and machine learning.

b) Data Analysis - This is the process that data undergoes, where statistical and logical techniques are systematically applied in order to discover useful information to support decision-making. The data can be inspected, cleansed, transformed, and modeled in order to discover this useful information and present it in a way that leads to strategic outcomes.

c) Deep Learning - This is a technique used in machine learning where a machine is able to adequately analyze current patterns, and predict future patterns, based on patterns it has seen in the past. It is referred to as deep learning because of the number of layers of data transformation that the patterns undergo to adequately understand the information that will be output.

d) Artificial Intelligence - The science of making machines "smart." AI should really be defined as making machines carry out human tasks. This could include things like robotic sweepers being able to detect where edges are, or even computer chess programs.

e) Machine Learning - ML is a subset of AI. It uses all of the other concepts we discussed above and applies them to a computer system. The machine is able to take input (data from the past) and apply it to current trends in the data, to help organizations make strategic decisions moving forward.


Cybersecurity Tasks Categories

a) Prediction - Making predictions that will help secure the network is one of the key goals of the tasks performed by cybersecurity professionals. This could include items such as identifying threats, or predicting attacks.

b) Prevention - Using those predictions, cybersecurity experts can then put mechanisms in place that prevent those attacks from being successful against their network.

c) Detection - Being able to quickly detect when an attack has taken place on the system is another key function of a cybersecurity position.

d) Response - After detecting an attempted attack against the systems, security professionals should have a response plan. This plan will differ depending on the type of attack encountered.

e) Monitoring - Monitoring should be a cybersecurity professional's top priority. Without it, the other four tasks are impossible. Monitoring gives us the ability to understand normal behaviors that we can then use to understand when an attack is happening.


Machine Learning tasks and Cybersecurity

a) Prediction <-> Regression - Regression uses knowledge about existing data to predict future values. This is easy to use for cybersecurity predictions, as we can take existing data and apply it to our own guesses about future events.

b) Prevention <-> Classification - Classification is the act of breaking data into groups based on some characteristic(s) of that data. Spam filters, for instance, use characteristics of a message to separate spam emails from other messages.

c) Detection <-> Clustering - Clustering is very similar to classification, with one exception: in clustering, we are unsure about the data, and we examine it to see if there are some common characteristics so that we can begin the classification process. This may be monitoring a specific group of users to see if we can limit their access via group permissions.

d) Response <-> Association Rule - Association rule learning uses a series of algorithms and rules to apply some recommended course of action. Think of Netflix, and how it recommends certain movies or shows based on your viewing history. If an organization faces many types of attacks and responds similarly when faced with a certain type, the ML system can apply the recommended response.

e) Monitoring <-> Generative - Generative models follow very different rules than any of the other models. Generative models are designed to simulate actual data based on previously recorded data. A vulnerability scanner could follow a generative model, as it could test different inputs to determine if certain vulnerabilities (such as injection attacks) are present.
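As a toy illustration of the clustering idea above, a two-centroid 1-D k-means can separate "typical" from "unusual" daily login counts without knowing anything about the data in advance (the data and the two-cluster choice are illustrative):

```python
# Tiny 1-D k-means with two centroids, pure Python: split values into the
# group near the low centroid and the group near the high centroid.
def two_means(values: list, iters: int = 20) -> tuple:
    lo, hi = min(values), max(values)
    for _ in range(iters):
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo = sum(a) / len(a)                  # recompute centroids
        hi = sum(b) / len(b) if b else hi
    return lo, hi

def cluster_of(v: float, centers: tuple) -> int:
    # 0 = near the low centroid, 1 = near the high centroid
    return 0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
```

Here the clusters emerge from the data itself; labeling them "normal" and "suspicious" afterwards is where the classification process would begin.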



4. Security Functions in a Cloud Environment

4.1 Identity and Access Management

4.1.1 What is Identity and Access Management (IAM)?


a) Identity 

Identity is the process of assigning a unique identifier to every individual user.

Systems then use this identification to determine if a user can have access to a resource.

b) Authentication

Authentication is the process of proving an identity. To do so, the user must submit their credentials to the authentication entity to gain access.

There are several different forms of authentication that you should know about:

- Multifactor Authentication (MFA)

*There are three common factors that can be used for authentication:

** What you have
** What you know
** What you are

*Multifactor authentication uses two or more of any of those methods in conjunction to add a layer of protection to the authentication process.


-Single Sign-On (SSO) - is a property that allows a user to log into one system, and gain access to all the systems associated with it.


-Federation - is simply allowing SSO across multiple domains. Google and Facebook are two of the biggest federation providers. This allows our users to authenticate to our systems using their already existing credentials with those providers.

-Tokens -

*Tokens can be hardware or software-based and provide an authentication mechanism around "something you have".

*Hardware tokens could be "smart cards" that you connect to your computer via a card reader that provides authentication.

*Software tokens can generally be installed on any device (such as your mobile phone) and are used to generate a one-time passcode.
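Software tokens of this kind commonly implement the standard HOTP/TOTP construction (RFC 4226 / RFC 6238). A standard-library-only sketch:

```python
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian counter, then dynamic truncation (RFC 4226)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    # Time-based variant (RFC 6238): the counter is the number of elapsed
    # 30-second steps since the Unix epoch
    t = int((time.time() if for_time is None else for_time) // step)
    return hotp(key, t, digits)
```

With the RFC test key `b"12345678901234567890"`, `totp(key, for_time=59)` yields the documented 6-digit code `287082`; the server runs the same computation and compares.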

c) Authorization

Authorization is very straightforward. Users are assigned access to specific resources contained within our systems (such as specific applications). This access is usually based on their role within the applications. Authorization is the process of determining which identities (after being authenticated) have access to which of those resources and which should be blocked.
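The role-based check described above can be sketched in a few lines (the role and resource names are made up for illustration):

```python
# Roles map to the resources they may access; a request from an already
# authenticated identity is allowed if any of its roles grants the resource.
ROLE_PERMISSIONS = {
    "admin":   {"billing", "reports", "user-admin"},
    "analyst": {"reports"},
}

def authorized(user_roles: set, resource: str) -> bool:
    return any(resource in ROLE_PERMISSIONS.get(r, set()) for r in user_roles)
```

Anything not explicitly granted is blocked, which mirrors the "which should be blocked" half of the definition.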



4.1.2 Why do we need IAM?

We need IAM for a couple of different reasons:

Simply put, we need IAM to perform authorization, that is, assigning access to resources within our systems.

The other reason we need IAM is for accountability. If an action is performed, how do we know who performed that action? We look at logs. Logs are tied to the assigned identity. If we do not have appropriate authentication in place, how can we be sure that the assigned user of that identity was the one who actually carried out that action? We can't.

4.1.3 What's Different About Cloud IAM?

External Users <-> Federated Identity Provider <-> Public Cloud
Administrators/Internal Users <-> Federated Identity Provider <-> Private Cloud

Public Cloud <-> Private Cloud <-> Public Cloud

4.2 Quarantining and Containing Cloud Servers

4.2.1 Quarantining Systems

a) Traditional Environments

What happens during a quarantine?

-One central management server

-You can quarantine a compromised server

b) Cloud Environments

How Do we Contain Attacks?

-Spin up a new container/server in place of the quarantined one


4.3 Understanding Cloud Disaster Recovery Procedures

4.3.1 Disaster Recovery

Disaster Recovery is a set of policies, tools, and procedures that an organization has in place to enable recovery or continuation of vital technological infrastructure and systems following a natural or man-made disaster.

DR allows an organization to protect itself from the effects of significant negative events, while ensuring that mission-essential functionality is maintained.


4.3.2 DR Site Types

There are 3 major DR site types that you should understand:

a) Hot site - Hot sites are already available to take over processing in the event that the primary site becomes unavailable. No additional configurations are necessary to become operational.

b) Cold site - Cold sites are alternative processing centers that have nothing configured. These take the longest to get up and running, but come with significant cost savings.

c) Warm site - Warm sites are more readily available than cold sites, as they have some (but not all) configurations in place. With warm sites, you typically have to do some configuration (such as data recovery) to become operational.

High Availability

When we are referring to systems that cannot have any downtime, we must employ high availability. High availability is an approach that allows us to maintain availability regardless of hardware or software failures.

This means that, regardless of the situation, the organization maintains:
-Data availability
-No unplanned downtime
-Acceptable performance regardless of load

This is generally carried out through redundant services, and utilizes several forms of DR testing to ensure availability.

RTO/RPO

RTO (Recovery Time Objective) and RPO (Recovery Point Objective) are two of the most important parts of a DR strategy. These two terms are often confused:

RTO is the time it takes after a disaster to be fully recovered. For instance, if our service went down at 8 and was restored at 10, we have an RTO of 2 hours.

RPO, on the other hand, is the amount of data that it is acceptable to lose in a disaster. For instance, if our last backup was 6 hours ago and our RPO is 19 hours, we fall within an acceptable timeframe.
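Both definitions reduce to simple time arithmetic. A sketch using the numbers from the two examples above:

```python
from datetime import datetime, timedelta

def rto_met(outage_start: datetime, restored: datetime, rto: timedelta) -> bool:
    # Recovery Time Objective: how long restoration is allowed to take
    return restored - outage_start <= rto

def rpo_met(last_backup: datetime, disaster: datetime, rpo: timedelta) -> bool:
    # Recovery Point Objective: how much data (measured in time) we may lose
    return disaster - last_backup <= rpo
```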



Advantages and Disadvantages        |        Our Network -> Data Backup

Advantages:

-Geographically separates the backups from the primary site
-Cloud solution is less expensive than maintaining secondary site

Disadvantages:

-Data traverses the Internet
-Organization gives up control of the data contained within the backups to the CSP
-Worse RPO/RTO than a managed cloud environment


Advantages and Disadvantages        |        Managed Cloud Architecture -> Data Replication

Advantages:

-Managed by CSP
-Real-time replication from primary processing site to DR site
-Better RTO/RPO
-High Availability

Disadvantages:

-Generally highly complex
-Very costly

4.4 Practical Security Application Locations

Protected Network -> Perimeter Network (Possibly Public Cloud) -> Internet

a) Firewall

Firewalls monitor traffic and block any that is not specifically allowed. These devices should be used at the perimeter of any network to ensure that the only traffic passing through is what we want to allow.

As you can see in the illustration above, firewalls should be placed on the boundary that separates the "Protected Network" and the "Perimeter Network", as well as the boundary that separates the Perimeter Network from the Internet.
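The "block anything not specifically allowed" behavior is just a first-match rule list with an implicit default deny. A sketch with illustrative rule fields:

```python
# A toy packet filter: evaluate rules in order, first match wins, and
# anything with no matching rule is dropped (implicit default deny).
RULES = [
    {"dst_port": 443, "proto": "tcp", "action": "allow"},
    {"dst_port": 80,  "proto": "tcp", "action": "allow"},
]

def permitted(packet: dict, rules: list = RULES) -> bool:
    for rule in rules:
        if (packet["dst_port"] == rule["dst_port"]
                and packet["proto"] == rule["proto"]):
            return rule["action"] == "allow"
    return False  # no rule matched: default deny
```

Real firewalls match on many more fields (source/destination address, state, interface), but the default-deny structure is the same.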

b) IDS/IPS 

Intrusion Detection and Prevention Systems (IDS/IPS), as their name implies, monitor and have rules in place to stop traffic that is indicative of an intrusion.

These should be placed on both networks, between the firewall and the end devices. This allows us to block malicious traffic that may have been allowed through the firewall, due to some existing firewall rules that allow that particular traffic.

c) IAM

Refer to the Identity and Access Management lesson for specifics of what actions IAM services can perform.

These services should be available on both the Perimeter Network as well as within the Protected Network, to properly authenticate and authorize users to particular resources.

d) SIEM

SIEMs (Security Information and Event Managers) are useful devices, and are really required for today's environments. They are used to aggregate logs from multiple sources and then correlate those logs to find meaningful events.

SIEMs should be placed within the protected network, with access restricted to only authorized individuals.

e) Other Tools and Techniques

Data encryption
* While in use
* While in transit
* While being stored

API (Application Programming Interface) Security

5. Maintaining GRC in Cloud Environments

5.1 Maintaining a Risk-Based Approach in a Cloud Environment

GRC (Governance, Risk Management, and Compliance) - in order to successfully manage GRC, we must maintain a risk-based approach for our programs.


Six steps of the Risk Management Framework provided by NIST:
https://nvlpubs.nist.gov/nistpubs/specialpublications/nist.sp.800-53r4.pdf


NIST RMF

-> Categorize Systems -> Select Controls -> Implement Controls -> Assess Controls -> Authorize Systems -> Monitor Controls -> Categorize Systems -> ...


a) Categorize Systems

It is important that we understand the information that is contained within our systems. In many cases, the security controls that we implement are a direct result of the information types that the system processes or stores. This is why categorizing systems is the first step of the NIST Risk Management Framework.

In a cloud environment, this can be especially troublesome. Before selecting a cloud service, an organization should understand the risks, and how to properly categorize their own information, to ensure protection against those risks.

b) Select Controls

After we have identified and categorized our information systems, the next step in the process is selecting the appropriate security controls that should be in place. Certain information types require specific controls. This list could include financial data, Personally Identifiable Information (PII), and Personal Health Information (PHI). To get a better understanding of the controls that could apply, read NIST Special Publication (SP) 800-53.

Organizations should consult with their CSP to validate control selection prior to agreeing to terms.


c) Implement Controls

Every organization interprets controls (and their implementation) differently. Organizations often bring in third-party consultant agencies, to assist in determining how to meet regulatory compliance.

To help determine how they can assist in meeting the organization's implementation of the selected controls, the CSP should be consulted prior to entering into any agreement.

d) Assess Controls

After implementing the controls, the next step in the process is to ensure that the controls are adequate. Assessing the selected controls and ensuring that they are appropriately implemented gives us a sense that they are operating as intended.

Assessing these controls also gives us the ability to evaluate the CSP's implementation of security controls. In most cases, the CSP does not provide key metrics to the customer. Thus, the onus is on the customer to assess their own controls.

e) Authorize Systems

After we have assessed our controls and their implementations, our systems can be authorized to begin storing and processing the approved data types. This step is especially important for any of those sensitive data types we mentioned earlier (PII, PHI, etc.).


For this scenario in a cloud environment, the organization has to take each stride along with the CSP to authorize the system to begin this data processing. Access to the system should be controlled appropriately, while still allowing everyone to complete their tasks.

f) Monitor Controls

The next step is to monitor and assess the selected controls on an ongoing basis. This ensures the security of the system maintains an appropriate level. Monitoring could include such actions as conducting security audits and keeping an eye on key security metrics. This allows us to make continuous improvements to the security of our system as we move forward.

In a cloud environment, where our organization does not have full control of the system overall, this becomes especially important. We not only have to keep track of our own controls and changes, but we must also ensure we understand the impacts associated with changes made by the CSP.


5.2 Maintaining Regulatory Compliance

5.2.1 Regulations 

a) HIPAA

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is US law that provides data privacy and security provisions for safeguarding medical information. The main objective of HIPAA legislation is to standardize electronic transmissions of medical information.

The Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 (fully adopted in 2013) provides guidelines that outline the responsibilities of any organization that handles PHI (Protected Health Information).

Any organization that stores, collects, or processes any individual's medical information is subject to HIPAA.

b) SOX

The Sarbanes-Oxley Act of 2002 (SOX) is US legislation that enforces auditing and financial regulations for public companies. As part of the SOX Act, particular security controls must be put in place to ensure that financial records are complete and accurate.

Sections 302 and 404 are especially pertinent to cybersecurity professionals. The act requires regular audits of internal controls, more specifically:

-Access to physical and electronic controls
-Security
-Change management procedures
-Disaster recovery procedures

c) PCI DSS

The Payment Card Industry Data Security Standard (PCI DSS) is the standard that organizations must follow in order to receive payment via credit cards from the major card companies (American Express, MasterCard, Visa, etc.). The PCI DSS standard is mandated by these card brands and administered by the Payment Card Industry Security Standards Council.

PCI DSS was created to increase security controls around cardholder data to reduce credit card fraud. All organizations that accept credit card payments must follow this standard. There are six requirements to maintain PCI compliance:

-Build and maintain a secure network
-Protect cardholder data
-Maintain a vulnerability management program
-Implement strong access control measures
-Regularly monitor and test networks
-Maintain an Information Security Policy

d) FISMA

The Federal Information Security Management Act (FISMA) is a US federal law passed in 2002 that requires federal agencies to develop, document, and implement an information security program. It was passed to reduce risks to federal information, as well as to set standards and guidelines for these information security programs. FISMA requirements are also extended to state-run agencies that administer federal programs (Medicare, for instance) and private organizations that are in a contractual relationship with the federal government.

FISMA guidelines are almost exclusively derived from the NIST (National Institute of Standards and Technology) 800 series.



5.2.2 Challenges

a) Maintaining Consistency

One of the major issues of migrating from an on-premises solution to a cloud environment is maintaining operational consistency. From a compliance standpoint, this means updating information security programs and plans, as well as understanding changes to standard operating procedures.

One key area of maintaining compliance is ensuring that operations are continuously examined for efficiency, which leads to the need for understanding the operations that the CSP will carry out.

b) Identifying Risks

One key function of regulatory compliance is identifying and categorizing risks to an organization's (and by extension, any customer's) data. Overall, the industry lacks an assessment model that allows us to efficiently identify risks in a cloud environment. Many organizations are scrambling to mold their own assessment frameworks into what they should consist of.

Risks are much different in a cloud vs. an on-premises environment. There are a few simple steps that should help in identifying those risks:

-Collaboration
-Knowledge
-Management engagement
-Following best practices

c) Geographical Data Location

Another major concern with maintaining regulatory compliance is the geographical location of data. Data protection laws vary from country to country, and in the US, sometimes even from state to state, so data location might be a piece of maintaining compliance.

For instance, a company located in the US that has business interests in the EU must maintain regulatory compliance in both locations. In some cases, this just means ensuring the proper documentation is in place, but it's important to understand those regulations in order to maintain compliance.

5.2.3 Benefits of Shared Responsibilities

Responsibility for security in the cloud is two-fold. The customer is responsible for the data, the applications, identity and access management, and encryption. The CSP is responsible for the database and other storage security, networking, and other infrastructure.

What are the benefits of this shared responsibility?


There are several benefits to this approach:
-The CSP may be able to share knowledge about how other clients have maintained compliance
-Experts get to work in their preferred field
-Due to the decrease in workload for each individual, more time can be spent on value-add activities (such as threat hunting or security research)
-CSPs employ experts that can help your organization ensure compliance

5.2.4 Tools and Techniques

5.2.4.1 GRC Tools

Governance, Risk, and Compliance tools are necessary for ensuring compliance, regardless of whether the system is in the cloud or on-premises. These tools assist in tasks such as:

-Identifying risks
-Building information security plans and programs
-Establishing security controls
-Tracking key metrics
-Maintaining various inventories

Because they can be used for all of these tasks, GRC tools act as a one-stop shop for auditors.

5.2.4.2 Benchmarks

Benchmarking is a process used to assess the current state of an application or system. Benchmarks are used to establish a baseline for what is a known good state.

Many organizations offer such benchmarks, such as the Center for Internet Security (CIS). They offer established security controls around common technologies (such as RHEL, Windows, Oracle, SQL, Apache, etc.). Other organizations offer industry-specific benchmarks that allow organizations to establish appropriate security safeguards on their systems. Those will keep them in compliance with their industry's regulatory mandates.

5.2.4.3 Compliance Monitoring

Applying benchmarks to particular technologies is useless unless we track configuration changes to those technologies. This is what compliance monitoring is for. We need to watch for any deviation from our environment's baseline. There are several tools that monitor configurations and determine if there has been any baseline drift. Compliance monitoring is especially important in cloud environments, as the CSP may make changes without properly notifying the customer. This automated monitoring allows us to identify when that happens, so that we may take some corrective action that will get us back into compliance.
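At its core, drift detection is a comparison between a host's current settings and the approved baseline. A sketch, assuming configurations have been flattened to key/value dicts (the setting names are illustrative, loosely styled after benchmark items):

```python
# Report every setting that deviates from the approved benchmark baseline,
# including settings that are missing entirely from the current config.
def config_drift(baseline: dict, current: dict) -> dict:
    return {
        key: {"expected": baseline[key], "actual": current.get(key, "<missing>")}
        for key in baseline
        if current.get(key, "<missing>") != baseline[key]
    }
```

Run on a schedule, a check like this surfaces both our own unreviewed changes and changes the CSP made without notice.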

5.2.4.4 Compliance Audits

Compliance audits are useful. They allow us to get an outside agency's views on our systems, processes, policies, etc. to ensure compliance.

Auditors are generally experts on compliance. They study and coordinate with oversight agencies to ensure they have a vast understanding of rules and regulations.

5.3 Today's Relations Between Security and Compliance

5.3.1 Compliance != Security

a) Compliance
-HIPAA
-PCI DSS
-FISMA
-SOX

b) Security
-Security Programs and Planning
-IAM
-Security Engineering
-Application Testing
-Vulnerability management
-Network Security
-Host-based Security
-Data Loss Prevention

5.3.2 Annual Audit

Audits are a great way for organizations to discover their current compliance posture. Most regulations require an annual audit to be performed to measure this compliance. However, there is one major downfall with this approach:

->Downfall

What happens every year right before an audit is to take place?


We cram. We tie up any loose ends, put new policies and procedures in place, and then we train our professionals how to answer questions from the auditors. We do this every single year.

Organizations are not doing what needs to be done in order to maintain compliance year-round. That, in turn, has a direct impact on the security posture of the organization.


5.3.3 Relations within the Cloud

Shared: Organization's Responsibility and CSP's Responsibility

Who is responsible for regulatory compliance? The organization.




