Cybersecurity Archives | simplyblock

Encryption At Rest: A Comprehensive Guide to DARE

TLDR: Data At Rest Encryption (DARE) is the process of encrypting data when stored on a storage medium. The encryption transforms the readable data (plaintext) into an encoded format (ciphertext) that can only be decrypted with knowledge of the correct encryption key.

Today, data is the “new gold” for companies. Data collection happens everywhere and at any time. The global amount of data collected is projected to reach 181 Zettabytes (that is, 181 billion Terabytes) by the end of 2025, a whopping 23.13% increase over 2024.

That means data protection is becoming increasingly important, and data security has become paramount for organizations of all sizes. One key aspect of any data security protocol is data-at-rest encryption, which provides crucial protection for stored data.

Data At Rest Encryption or Encryption At Rest

Understanding Data-in-Use, Data-in-Transit, and Data-at-Rest

Before we go deep into DARE, let’s quickly discuss the three states of data within a computing system and infrastructure.

Figure 1: The three states of data in encryption

To start, any type of data is created inside an application. While the application holds onto it, it is considered data in use. That means data in use describes information actively being processed, read, or modified by applications or users.

For example, imagine you open a spreadsheet to edit its contents or a database to process a query. This data is considered “in use.” This state is often the most vulnerable as the data must be in an unencrypted form for processing. However, technologies like confidential computing enable encrypted memory to process even these pieces of data in an encrypted manner.

Next, data in transit describes any information moving between locations, whether across the internet, within a private network, or between memory and processors. Email messages being sent, files being downloaded, and database query results traveling between the database server and applications are all examples of data in transit.

Last but not least, data at rest refers to any piece of information stored physically on a digital storage medium such as flash storage or a hard disk. Storage solutions like cloud storage, offline backups, and file systems count as digital storage, too. Hence, data stored in these services is also data at rest. Think of files saved on your laptop's hard drive or photos stored in cloud storage such as Google Drive, Dropbox, or similar.

Criticality of Data Protection

For organizations, threats to their data are omnipresent. They range from unfortunate human error that deletes important information, to coordinated ransomware attacks that encrypt your data and demand a ransom, to actual data leaks.

Especially with data leaks, most people think about external hackers copying data off the organization's network and making it public. However, this isn't the only way data leaks happen. There are many examples of Amazon S3 buckets left open without proper authorization, databases accessible from the outside world, or backup services being broken into.

Anyhow, organizations face increasing threats to their data security. Any data breach has consequences, which fall into four categories:

  1. Financial losses from regulatory fines and legal penalties
  2. Damage to brand reputation and loss of customer trust
  3. Exposure of intellectual property to competitors
  4. Compliance violations with regulations like GDPR, HIPAA, or PCI DSS

While data in transit is commonly protected through TLS (Transport Layer Security), data at rest encryption (or DARE) is often an afterthought. This is typically because the setup isn't as easy and straightforward as it should be. It's this “I can still do this afterward” effect: “What could possibly go wrong?”

However, data at rest is the most vulnerable of all. Data in use is often unprotected but very transient; while there is a chance of a leak, it is small. Data at rest, by contrast, persists for extended periods of time, giving attackers more time to plan and execute their attacks. Secondly, persistent data often contains the most valuable pieces of information, such as customer records, financial data, or intellectual property. And lastly, access to persistent storage exposes large amounts of data in a single breach, making it so much more interesting to attackers.

Understanding Data At Rest Encryption (DARE)

Simply put, Data At Rest Encryption (DARE) transforms stored data into an unreadable format that can only be read with the appropriate decryption key. This ensures that the information isn't readable even if unauthorized parties gain access to the storage medium.

That said, the strength of the encryption used is crucial. Many encryption algorithms once considered secure have been broken over time. Encrypting data once and taking for granted that unauthorized parties can never access it is simply wrong. Data at rest encryption is an ongoing process, potentially involving re-encrypting information when stronger, more robust encryption algorithms become available.

Available Encryption Types for Data At Rest Encryption

Figure 2: Symmetric encryption (same encryption key) vs asymmetric encryption (private and public key)

Two encryption methods are generally available today for large-scale setups. While we look forward to quantum-safe encryption algorithms, meaning encryption that resists breaking attempts by quantum computers, a widely adopted solution has yet to emerge.

Anyhow, the first type of encryption is symmetric encryption. It uses the same key for encryption and decryption. The most common symmetric encryption algorithm is AES, the Advanced Encryption Standard.

// Symmetric encryption: the same key is used to encrypt and decrypt
const cipherText = encrypt(plaintext, encryptionKey);
const plaintext = decrypt(cipherText, encryptionKey);

The second type of encryption is asymmetric encryption. Here, the encryption and decryption routines use different but related keys, generally referred to as the private key and the public key. As their names suggest, the public key can be publicly known, while the private key must be kept private. Both keys are mathematically connected and generally based on a hard-to-solve mathematical problem. Two standard algorithms are essential: RSA (Rivest–Shamir–Adleman) and ECDSA (the Elliptic Curve Digital Signature Algorithm; strictly speaking, a signature algorithm rather than an encryption algorithm). While RSA is based on the prime factorization problem, ECDSA is based on the discrete logarithm problem. Going into detail about those problems would take more than one additional blog post, though.

// Asymmetric encryption: encrypt with the public key, decrypt with the private key
const cipherText = encrypt(plaintext, publicKey);
const plaintext = decrypt(cipherText, privateKey);

Encryption in Storage

The use cases for symmetric and asymmetric encryption differ. While considered more secure, asymmetric encryption is slow and typically not used for large data volumes. Symmetric encryption is fast but requires sharing the encryption key between the encrypting and decrypting parties, which introduces a key-distribution risk.

To encrypt large amounts of data and get the best of both worlds, security and speed, you often see a combination of both approaches, commonly known as envelope encryption. The symmetric encryption key is encrypted with an asymmetric encryption algorithm. After decrypting the symmetric key, it is used to encrypt or decrypt the stored data.
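
The following is a minimal sketch of this flow using Node.js's built-in crypto module, in the style of the snippets above. The key pair is generated in place purely for illustration; in practice, the public key would come from your KMS or the recipient:

const crypto = require('crypto');

// For the sketch, generate a key pair locally; normally it comes from a KMS.
const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', { modulusLength: 2048 });

// 1. Encrypt the bulk data with a fresh symmetric key (fast, fit for large volumes).
const dataKey = crypto.randomBytes(32);            // 256-bit AES key
const iv = crypto.randomBytes(12);                 // GCM nonce
const cipher = crypto.createCipheriv('aes-256-gcm', dataKey, iv);
const cipherText = Buffer.concat([cipher.update('secret data'), cipher.final()]);
const authTag = cipher.getAuthTag();

// 2. Wrap the small symmetric key with the asymmetric public key (slow, but tiny payload).
const wrappedKey = crypto.publicEncrypt(publicKey, dataKey);

// Decryption reverses the process: unwrap the key, then decrypt the data.
const unwrappedKey = crypto.privateDecrypt(privateKey, wrappedKey);
const decipher = crypto.createDecipheriv('aes-256-gcm', unwrappedKey, iv);
decipher.setAuthTag(authTag);
const plaintext = Buffer.concat([decipher.update(cipherText), decipher.final()]);

Only the small data key pays the cost of the slow asymmetric operation, while the bulk data is handled by fast symmetric AES.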

Simplyblock provides data-at-rest encryption directly through its Kubernetes CSI integration or via CLI and API. Additionally, simplyblock can be configured to use a different encryption key per logical volume for the highest degree of security and multi-tenant isolation.

Key Management Solutions

Next to selecting the encryption algorithm, managing and distributing the necessary keys is crucial. This is where key management solutions (KMS) come in.

Generally, there are two basic types of key management solutions: hardware-based and software-based.

For hardware-based solutions, you'll typically utilize an HSM (Hardware Security Module). HSMs store keys in dedicated, tamper-resistant hardware, ranging from USB tokens to PCIe cards and network-attached appliances. They offer the highest level of security but need to be physically attached to systems and are more expensive.

Software-based solutions offer a flexible key management alternative. Almost all cloud providers offer their own KMS systems, such as Azure Key Vault, AWS KMS, or Google Cloud KMS. Additionally, third-party key management solutions are available when setting up an on-premises or private cloud environment.
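
As a quick illustration, requesting a fresh data key from a cloud KMS is typically a one-liner. With the AWS CLI it might look like this (the key alias is hypothetical):

aws kms generate-data-key --key-id alias/storage-encryption --key-spec AES_256

The response contains both a plaintext data key for immediate use and an encrypted copy to store alongside the data, mirroring the envelope encryption approach described earlier.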

When managing more than a few encrypted volumes, you should implement a key management solution. That’s why simplyblock supports KMS solutions by default.

Implementing Data At Rest Encryption (DARE)

Simply put, you should have a key management solution ready when implementing data-at-rest encryption in your company. If you run in the cloud, I recommend using whatever the cloud provider offers. If you run on-premises or your cloud provider doesn't provide one, select one of the existing third-party solutions.

DARE with Linux

As the most popular server operating system, Linux offers two options to choose from when you want to encrypt data. The first option works on individual files, providing filesystem-level encryption.

Typical solutions for filesystem-level encryption are eCryptfs, a stacked filesystem that stores encryption information in the header of each encrypted file, and EncFS, a user-space filesystem that runs without special permissions. The benefit of eCryptfs is the ability to copy encrypted files to other systems without decrypting them first; as long as the target node has the necessary encryption key in its keyring, the file can be decrypted. However, neither solution has seen updates in many years, so I'd generally recommend the second approach: block-level encryption.

Block-level encryption transparently encrypts the whole content of a block device (a hard disk, SSD, simplyblock logical volume, etc.). Data is automatically decrypted when read. While VeraCrypt exists, it is mainly a solution for home setups and laptops. For server setups, the most common way to implement block-level encryption is the combination of dm-crypt and LUKS, the Linux Unified Key Setup.

Linux Data At Rest Encryption with dm-crypt and LUKS

The fastest way to encrypt a volume with dm-crypt and LUKS is via the cryptsetup tools.

Debian / Ubuntu / Derivates:

sudo apt install cryptsetup 

RHEL / Rocky Linux / Oracle Linux:

sudo dnf install cryptsetup   # on older releases: yum install cryptsetup-luks

Now, we need to enable encryption for our block device. In this example, we assume we already have a whole block device or a partition we want to encrypt.

Warning: The following command will delete all data from the given device or partition. Make sure you use the correct device file.

cryptsetup -y luksFormat /dev/xvda

I recommend always running cryptsetup with the -y parameter, which forces it to ask for the passphrase twice. If you mistyped it the first time, you'll realize it now, not later.

Now open the encrypted device.

cryptsetup luksOpen /dev/xvda encrypted

This command will ask for the passphrase. The passphrase is not recoverable, so make sure you remember it.
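
Since a forgotten passphrase means lost data, it can be worth registering a second passphrase as a backup; LUKS supports multiple key slots for exactly this purpose:

cryptsetup luksAddKey /dev/xvda

The command asks for an existing passphrase first, then for the new one to add.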

Afterward, the device is ready to be used as /dev/mapper/encrypted. We can format and mount it.

mkfs.ext4 /dev/mapper/encrypted
mkdir /mnt/encrypted
mount /dev/mapper/encrypted /mnt/encrypted
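
Once you are done using the volume, you can unmount and lock it again with the reverse steps:

umount /mnt/encrypted
cryptsetup luksClose encrypted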

Data At Rest Encryption with Kubernetes

Kubernetes offers ephemeral and persistent storage as volumes. Those volumes can be statically or dynamically provisioned.

For pre-created and statically provisioned volumes, you can follow the above guide on encrypting block devices on Linux and make the already encrypted device available to Kubernetes.

However, Kubernetes doesn’t offer out-of-the-box support for encrypted and dynamically provisioned volumes. Encrypting persistent volumes is not in Kubernetes’s domain. Instead, it delegates this responsibility to its container storage provider, connected through the CSI (Container Storage Interface).

Note that not all CSI drivers support data-at-rest encryption, though. Simplyblock does!

Data At Rest Encryption with Simplyblock

Figure 3: Encryption stack for Linux: dm-crypt+LUKS vs Simplyblock

Due to the importance of DARE, simplyblock enables you to secure your data immediately through data-at-rest encryption. Simplyblock goes above and beyond with its features and provides a fully multi-tenant DARE feature set. That means, in simplyblock, you can encrypt multiple logical volumes (virtual block devices) with the same key or use one key per logical volume.

The use cases differ. One key per volume enables the highest level of security and complete isolation, even between volumes. Use this when you want to fully encapsulate applications or teams.

Sharing the same encryption key across multiple volumes, on the other hand, is useful when you provide one key per customer. This ensures that customers are isolated from each other and that data cannot be accessed by other customers on a shared infrastructure in the case of a configuration failure or similar incident.

To set up simplyblock, you can configure keys manually or utilize a key management solution. In this example, we'll set up the keys manually. We also assume that simplyblock has already been deployed as your distributed storage platform and that the simplyblock CSI driver is available in Kubernetes.

First, let’s create our Kubernetes StorageClass for simplyblock. Deploy the YAML file via kubectl, just as any other type of resource.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
    name: encrypted-volumes
provisioner: csi.simplyblock.io
parameters:
    encryption: "True"
    csi.storage.k8s.io/fstype: ext4
    ... other parameters
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true

Second, we generate the two keys. Note down the output of both commands.

openssl rand -hex 32   # Key 1
openssl rand -hex 32   # Key 2
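
Note that values under a Kubernetes Secret's data field must be base64-encoded. Instead of encoding the keys by hand for the manifest below, you can let kubectl create the secret and handle the encoding for you (the placeholders stand for the two generated keys; whether simplyblock expects the hex strings verbatim is an assumption here, so double-check the CSI driver documentation):

kubectl create secret generic encrypted-volume-keys \
    --from-literal=crypto_key1=<key 1> \
    --from-literal=crypto_key2=<key 2>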

Now, we can create our secret and Persistent Volume Claim (PVC).

apiVersion: v1
kind: Secret
metadata:
    name: encrypted-volume-keys
data:
    crypto_key1: <base64-encoded key 1>
    crypto_key2: <base64-encoded key 2>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
    annotations:
        simplybk/secret-name: encrypted-volume-keys
    name: encrypted-volume-claim
spec:
    storageClassName: encrypted-volumes
    accessModes:
        - ReadWriteOnce
    resources:
        requests:
            storage: 200Gi

And we’re done. Whenever we use the persistent volume claim, Kubernetes will delegate to simplyblock and ask for the encrypted volume. If it doesn’t exist yet, simplyblock will create it automatically. Otherwise, it will just provide it to Kubernetes directly. All data written to the logical volume is fully encrypted.
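
For completeness, a minimal pod consuming the claim might look like the following sketch (the container image and mount path are placeholders):

apiVersion: v1
kind: Pod
metadata:
    name: encrypted-volume-test
spec:
    containers:
        - name: app
          image: nginx
          volumeMounts:
              - name: data
                mountPath: /data
    volumes:
        - name: data
          persistentVolumeClaim:
              claimName: encrypted-volume-claim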

Best Practices for Securing Data at Rest

Implementing robust data encryption is crucial. In the best case, data should never exist in a decrypted state.

That said, data-in-use encryption is still complicated as of this writing. However, solutions such as Edgeless Systems' Constellation exist and make it possible using hardware memory encryption.

Data-in-transit encryption is commonly available today via TLS. If you don't use it yet, there is no time to waste. Low-hanging fruit first.

Data-at-rest encryption in Windows, Linux, or Kubernetes doesn’t have to be complicated. Solutions such as simplyblock enable you to secure your data with minimal effort.

However, there are a few more things to remember when implementing DARE effectively.

Data Classification

Organizations should classify their data based on sensitivity levels and regulatory requirements. This classification guides encryption strategies and key management policies. A robust data classification system includes three things:

  • Sensitive data identification: Identify sensitive data through automated discovery tools and manual review processes. For example, personally identifiable information (PII) like social security numbers should be classified as highly sensitive.
  • Classification levels: Establish clear classification levels such as Public, Internal, Confidential, and Restricted. Each level should have defined handling requirements and encryption standards.
  • Automated classification: Implement automated classification tools to scan and categorize data based on content patterns and metadata.

Access Control and Key Management

Encryption is only as strong as the key management and permission control around it. If your keys are leaked, the strongest encryption is useless.

Therefore, it is crucial to implement strong access controls and key rotation policies. Additionally, regular key rotation helps minimize the impact of potential key compromises and, I hate to say it, employees leaving the company.
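
On the LUKS level from the earlier example, rotating a passphrase is a single command:

cryptsetup luksChangeKey /dev/xvda

Note that this replaces the passphrase that unlocks the volume key, not the volume key itself; fully re-encrypting the data requires cryptsetup reencrypt. Cloud KMS systems offer comparable rotation mechanisms through their APIs.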

Monitoring and Auditing

Understanding potential risks early is essential. That's why you must maintain comprehensive logs of all access to encrypted data and to the encryption keys or key management solution. Also, regular audits should be scheduled to look for suspicious activities.

In the best case, multiple teams run independent audits to prevent internal leaks through dubious employees. While it may sound harsh, there are situations in life where people take the wrong path. Not necessarily on purpose or because they want to.

Data Minimization

The most secure data is the data that isn’t stored. Hence, you should only store necessary data.

Apart from that, encrypt only what needs protection. While this may sound counterintuitive, it reduces the attack surface and the performance impact of encryption.

Data At Rest Encryption: The Essential Component of Data Management

Data-at-rest encryption (DARE) has become essential for organizations handling sensitive information. The rise in data breaches and increasingly stringent regulatory requirements make it vital to protect stored information. Additionally, with the rise of cloud computing and distributed systems, implementing DARE is more critical than ever.

Simplyblock integrates natively with Kubernetes to provide a seamless approach to implementing data-at-rest encryption in modern containerized environments. With our support for transparent encryption, your organization can secure its data without any application changes. Furthermore, simplyblock utilizes the standard NVMe over TCP protocol, which enables us to work natively with Linux. No additional drivers are required. Use a simplyblock logical volume straight from your dedicated or virtual machine, including all encryption features.

Anyhow, for organizations running Kubernetes, whether in public clouds, private clouds, or on-premises, DARE serves as a fundamental security control. By following best practices and using modern tools like Simplyblock, organizations can achieve robust data protection while maintaining system performance and usability.

But remember that DARE is just one component of a comprehensive data security strategy. It should be combined with other security controls, such as access management, network security, and security monitoring, to create a defense-in-depth approach to protecting sensitive information.

That all said, by following the guidelines and implementations detailed in this article, your organization can effectively protect its data at rest while maintaining system functionality and performance.

As threats continue to evolve, having a solid foundation in data encryption becomes increasingly crucial for maintaining data security and regulatory compliance.

What is Data At Rest Encryption?

Data At Rest Encryption (DARE), or encryption at rest, is the process of encrypting data when stored on a storage medium. The encryption transforms the readable data (plaintext) into an encoded format (ciphertext) that can only be decrypted with knowledge of the correct encryption key.

What is Data-At-Rest?

Data-at-rest refers to any piece of information written to physical storage media, such as flash storage or a hard disk. Storage solutions like cloud storage, offline backups, and file systems count as digital storage as well.

What is Data-In-Use?

Data-in-use describes any data an application holds onto while working with it. Hence, data-in-use is information actively being processed, read, or modified by applications or users.

What is Data-In-Transit?

Data-in-transit describes any information moving between locations, such as data sent across the internet, moved within a private network, or transmitted between memory and processors.

9 Best Tools for Threat and Risk Management

What is Threat and Risk Management?

In today’s digital landscape, managing and mitigating threats is paramount for businesses to protect their data, systems, and operations. Threat and risk management involves identifying, assessing, and controlling potential threats to your infrastructure and ensuring your organization’s security. A proactive approach to managing risks can prevent disruptions, data breaches, and significant financial losses.

What are the best open-source tools for threat and risk management?

As cyber threats become more complex, the demand for robust and effective risk management tools has risen. Businesses are constantly looking for tools that can help them identify vulnerabilities, monitor potential threats, and mitigate risks efficiently. This post will explore nine essential tools to help you improve your threat and risk management strategy.

1. OpenVAS

OpenVAS (Open Vulnerability Assessment System) is an open-source tool for identifying security vulnerabilities in systems and networks. It provides comprehensive vulnerability scanning and reporting, helping organizations detect weaknesses before they can be exploited. OpenVAS is essential for maintaining a secure environment and staying ahead of emerging threats.

2. Snort

Snort is a powerful open-source intrusion detection and prevention system (IDS/IPS) that helps detect malicious activity on your network. Snort identifies potential threats by analyzing network traffic and generates alerts for suspicious activity. It’s a crucial tool for real-time monitoring and ensuring your network is secure from external threats.

3. Metasploit

Metasploit is a widely used penetration testing tool that helps security professionals simulate real-world attacks to identify system vulnerabilities. It provides a robust framework for testing the effectiveness of your security defenses, allowing you to pinpoint weaknesses and implement necessary patches. Metasploit is essential for proactive threat management and system hardening.

4. Wireshark

Wireshark is a leading network protocol analyzer that helps security teams capture and inspect data packets in real time. By examining network traffic at the packet level, Wireshark can help detect anomalies, suspicious activity, and potential attacks. It's an invaluable tool for network forensics and incident response, helping you quickly identify and mitigate security threats.

5. OSSEC

OSSEC (Open Source Security) is a comprehensive host-based intrusion detection system (HIDS) that performs file integrity monitoring, log analysis, rootkit detection, and real-time alerting. It provides deep insight into system activity, helping administrators detect unauthorized access or suspicious changes to critical files. OSSEC is a must-have tool for maintaining the integrity of your systems.

6. Nmap

Nmap is an open-source network scanner that helps administrators map their network, identify devices, and discover vulnerabilities. With its ability to detect open ports, services, and operating systems, Nmap is essential for identifying potential entry points for attackers and mitigating risks before they are exploited. It is a foundational tool for network security assessments.
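
As an illustration, a typical first scan combines service and OS detection (the target network is a placeholder):

sudo nmap -sV -O 192.168.1.0/24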

7. Kali Linux

Kali Linux is a specialized Linux distribution designed for advanced penetration testing and security auditing. It comes pre-installed with various security tools, including vulnerability scanners, password crackers, and forensic tools. Kali Linux is ideal for security professionals who need to perform thorough threat and risk assessments across complex infrastructures.

8. Nessus

Nessus is a popular vulnerability scanner used by security professionals to identify security holes in their systems. Note that, unlike most tools on this list, Nessus is a commercial product, though a free edition exists for limited use. It offers detailed reports on identified vulnerabilities and provides recommendations for remediation. Nessus is highly effective in helping organizations comply with security standards and reduce their exposure to risk.

9. Maltego

Maltego is a powerful open-source intelligence and forensics tool that helps security professionals visualize the relationships between data points across networks, domains, and individuals. It’s particularly useful for threat intelligence gathering and tracking malicious actors. Maltego provides valuable insights into potential risks and helps organizations proactively protect against emerging threats.

Why Choose simplyblock for Threat and Risk Management?

While security tools help prevent attacks, ransomware threats demand robust data protection and recovery capabilities. This is where simplyblock’s specialized approach creates unique value:

  • Immutable Data Protection: Simplyblock provides an immutable backup solution, ensuring your critical data is protected and unalterable by ransomware attacks. By maintaining secure, versioned copies of your data, simplyblock ensures you always have a clean recovery point, even if your primary systems are compromised.
  • Rapid Recovery Capabilities: In the event of a security breach or ransomware attack, simplyblock enables fast recovery of your data without paying ransom. The platform’s efficient recovery mechanisms help minimize downtime and data loss, ensuring business continuity after a successful attack.
  • Zero Trust Architecture: Simplyblock’s approach to data protection implements zero trust principles, ensuring that backup data remains secure and isolated from potential threats. This architectural approach provides an additional defense against sophisticated attacks that might compromise traditional security tools.

How to Optimize Threat and Risk Management with Open-source Tools

This guide explored nine essential threat and risk management tools, from OpenVAS's vulnerability assessment to Maltego's threat intelligence capabilities. While these tools excel at different aspects (Snort for intrusion detection, OSSEC for host-based monitoring, Metasploit for penetration testing), proper implementation is crucial. Tools like Wireshark enable deep packet inspection, while Nmap and Kali Linux provide comprehensive security testing capabilities. Each tool offers unique approaches to identifying and mitigating security risks.

If you’re looking to strengthen your threat and risk management strategy further, Simplyblock offers comprehensive solutions that integrate seamlessly with these tools. These solutions help you secure your infrastructure with high-performance, low-latency storage options.

Ready to optimize your risk management? Contact simplyblock today to learn how we can help you enhance your security infrastructure with solutions tailored to your business needs.

Ransomware Attack Recovery with Simplyblock

In 2023, the number of victims of ransomware attacks more than doubled, with 2024 off to an even stronger start. A ransomware attack encrypts your local data, and the attackers demand a ransom to be paid. In addition, data is often copied to remote locations to increase the pressure on companies to pay the ransom. This also increases the risk of the data being leaked to the internet even if the ransom is paid. Strong ransomware protection and mitigation are now more important than ever.

Simplyblock provides sophisticated block storage-level Ransomware protection and mitigation. Together with recovery options, simplyblock enables Point-in-Time Recovery (PITR) for any service or solution storing data.

What is Ransomware?

Ransomware is a type of malicious software (also known as malware) designed to block access to a computer system and/or encrypt data until a ransom is paid to the attacker. Cybercriminals typically carry out this type of attack by demanding payment, often in cryptocurrency, in exchange for providing a decryption key to restore access to the data or system.

Statistics show a significant rise in ransomware cyber attacks: ransomware cases more than doubled in 2023, and the amount of ransom paid reached more than a billion dollars—and these are only official numbers. Many organizations prefer not to report breaches and payments, as those are illegal in many jurisdictions.

Number of quarterly Ransomware victims between Q1 2021 and Q1 2024

The Danger of Ransomware Increases

The number and sophistication of attack tools have also increased significantly. They are becoming increasingly commoditized and easy to use, drastically reducing the skills cyber criminals require to deploy them.

There are many best practices and tools to protect against successful attacks. However, little can be done once an account, particularly a privileged one, has been compromised. Even if the breach is detected, it is most often too late. Attackers may only need minutes to encrypt important data.

Storage, particularly backups, serves as a last line of defense. After a successful attack, backups provide a means to recover. However, there are certain downsides to using backups to recover from a successful attack:

  • The latest backup does not contain all of the data: Data written between the last backup and the time of the attack is unrecoverably lost. Even the loss of one hour of data written to a database can be critical for many enterprises.
  • Backups are not consistent with each other: The backup of one database may not fit the backup of another database or a file repository, so the systems will not be able to integrate correctly after restoration.
  • The latest backups may already contain encrypted data. It may be necessary to go back in time to find an older backup that is still “clean.” This backup, if available at all, may be linked to substantial data loss.
  • Backups must be protected from writes and delete operations; otherwise, they can be destroyed or damaged by attackers. Attackers may also damage the backup inventory management system, making it hard or impossible to locate specific backups.
  • Human error in Backup Management may lead to missing backups.

Simplyblock for Ransomware Protection and Mitigation

Simplyblock provides a smart solution to recover data after a ransomware attack, complementing classical backups.

In addition to writing data to hot-tier storage, simplyblock creates an asynchronously replicated write-ahead log (WAL) of all data written. This log is optimized for high throughput to secondary (low-IOPS) storage, such as Amazon S3 or HDD pools like AWS's EBS st1 service. If this secondary storage supports write and deletion protection for pre-defined retention periods, as S3 does, it is possible to “rewind” the storage to the point immediately before the attack. This enables data recovery with a near-zero RPO (Recovery Point Objective).

A recovery mechanism like this is particularly useful in combination with databases. Before an attack can encrypt database files, the database system typically has to be stopped, as all data and WAL files are otherwise in use by the database. This stop marks a clean cut, which allows a consistent recovery point with no data loss to be identified automatically.

Timeline of a Ransomware attack

In the future, simplyblock plans to enhance this functionality further. A multi-stage attack detection mechanism will be integrated into the storage, along with deletion protection that holds until a historical time window has been cleared of attacks, and precise automatic identification of attack launch points to locate recovery points.

Furthermore, simplyblock will support partial restores of recovery points, enabling different services' data on the same logical volume to be restored from individual points in time. This is important since encryption of one service might have started earlier or later than for others; hence, the point in time to rewind to must differ.

Conclusion

Simplyblock provides a complementary recovery solution to classical backups. Backups support long-term storage of full recovery snapshots. In contrast, write-ahead log-based recovery is specifically designed for near-zero RPO recovery right after a Ransomware attack starts and enables quick and easy recovery for data protection.

While many databases and data-storing services, such as PostgreSQL, may provide the possibility of Point-in-Time Recovery, the WAL segments need to be stored outside the system as soon as they are closed. That said, the RPO would come down to the size of a WAL segment, whereas with simplyblock, due to its copy-on-write nature, the RPO can be as small as one committed write.
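
For reference, PostgreSQL ships closed WAL segments out of the system via its archive command. A minimal sketch in postgresql.conf, with a local path as a placeholder (in practice, a remote or object storage target is preferable):

archive_mode = on
archive_command = 'cp %p /mnt/wal-archive/%f'   # %p = path to segment, %f = file name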

Learn more about simplyblock and its other features like thin-provisioning, immediate clones and branches, encryption, compression, deduplication, and more. Or just get started right away and find the best Ransomware attack protection and mitigation to date.

Policy Management at Cloud-Scale with Anders Eknert from Styra (video + interview)

This interview is part of simplyblock's Cloud Commute Podcast, available on YouTube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site.

In this installment of the podcast, we're joined by Anders Eknert (Twitter/X, Personal Blog), a Developer Advocate for Styra, who talks about the functionality of OPA, the Open Policy Agent project at Styra, from a developer's perspective, explaining how it integrates with services to enforce policies. The discussion touches on the broader benefits of a unified policy management system and how OPA and Styra DAS (Declarative Authorization Service) facilitate this at scale, ensuring consistency and control across complex environments. See below for more information on what the Open Policy Agent project is, what ‘Policy as Code’ is, and what tools are available, as well as how OPA can help make simplyblock more secure. Also see the interview transcript section at the end.

EP15: Policy Management at Cloud-Scale with Anders Eknert from Styra

Key Learnings

What is the Open Policy Agent (OPA) Project?

The Open Policy Agent (OPA) is a framework designed for defining and running policies as code, decoupled from applications, for use cases like authorization or infrastructure policy. It allows organizations to maintain a unified approach to policy management across their entire technology stack. Styra, the company behind OPA, enhances its capabilities with two key products: Styra DAS and an enterprise distribution of OPA. Styra DAS is a commercial control plane for managing OPA at scale, handling the entire policy lifecycle. The enterprise distribution of OPA features a different runtime that consumes less memory, evaluates faster, and can connect to various data sources, providing more efficient and scalable policy management solutions.

What is Policy as Code?

Policy as code is a practice where policies and rules are defined, managed, and executed using code rather than through manual processes. This approach allows policies to be versioned, tested, and automated, similar to software development practices. By treating policies as code, organizations can ensure consistency, repeatability, and transparency in their policy enforcement, making it easier to manage and audit policies across complex environments. Tools like Open Policy Agent (OPA) (see above) facilitate policy as code by providing a framework to write, manage, and enforce policies programmatically.
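
To give a flavor of what policy as code looks like in practice, here is a minimal OPA policy written in Rego. The input attributes (user role and action) are illustrative, not a fixed schema:

package authz

import rego.v1

default allow := false

# Allow the request only for admins performing a read.
allow if {
    input.user.role == "admin"
    input.action == "read"
}

Evaluated against a JSON input document, the policy returns allow: true only when both conditions hold; everything else falls back to the default.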

What are the available Policy as Code Tools?

Several tools are available for implementing Policy as Code. Some of the prominent ones include:

  1. Open Policy Agent (OPA): An open-source framework to write, manage, test, and enforce policies for infrastructure modifications, service communication, and access permissions. See our podcast episode with Anders Eknert from Styra.
  2. HashiCorp Sentinel: A policy-as-code framework deeply integrated with HashiCorp products like Terraform, Vault, and Consul.
  3. Kyverno: A Kubernetes-native policy management tool that allows you to define, validate, and enforce policies for Kubernetes resources.
  4. Azure Policy: A service in Microsoft Azure for enforcing organizational standards and assessing compliance.

These tools help ensure that policies are codified, version-controlled, and easily integrated into CI/CD pipelines, providing greater consistency and efficiency in policy management.

How will OPA help to Make Simplyblock even more Secure?

Integrating Open Policy Agent (OPA) with simplyblock and Kubernetes can enhance security in several ways:

  • Centralized Policy Management: OPA allows defining and enforcing policies centrally, ensuring consistent security policies across all services and environments.
  • Fine-Grained Access Control: OPA provides detailed control over who can access what, reducing the risk of unauthorized access. Policies can, for example, be used to limit access to simplyblock block devices or prevent unauthorized write mounts.
  • Compliance and Auditing: OPA's policies can be versioned and audited, helping simplyblock meet your compliance requirements. Using simplyblock and OPA, you have proof of who was authorized to access your data storage at any point in time.
  • Dynamic Policy Enforcement: OPA can enforce policies in real time, responding to changes quickly and preventing security breaches.
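
As a sketch of the kind of admission policy this enables, the following Rego rule would reject pods mounting persistent volume claims outside an approved naming scheme. The claim-name convention is hypothetical:

package kubernetes.admission

import rego.v1

# Deny pods that mount a PVC whose name does not mark it as encrypted.
deny contains msg if {
    input.request.kind.kind == "Pod"
    some volume in input.request.object.spec.volumes
    claim := volume.persistentVolumeClaim.claimName
    not startswith(claim, "encrypted-")
    msg := sprintf("pod %v mounts non-approved claim %v", [input.request.object.metadata.name, claim])
}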

Transcript

Chris Engelbert: Hello everyone, welcome back to this week’s episode of simplyblock’s Cloud Commute Podcast. Today I have a guest with me that I actually never met in person as far as I know. I don’t think we have. No. Maybe just say a few words about you, where you’re from, who you are, and what you’re working with.

Anders Eknert: Sure, I’m Anders. I live here and work in Stockholm, Sweden. I work as a developer advocate, or a DevRel lead even, for Styra, the company behind the Open Policy Agent (OPA) project.

I’ve been here for, I think it’s three and a half years or so. Before that, I was at another company where I got involved in the OPA project. We had a need for a solution to do access control or authorization across a very diverse and complex environment. We had development teams in 12 different countries, seven different programming languages in our cluster, and it was just a big mess. Our challenge was how to do authorization in that kind of environment without having to go out to all of these teams and try to coordinate development work with each change we needed to do.

So that’s basically how I got involved in the OPA project. OPA emerged as a good solution to our problem at that time and, yeah, all these years later I’m still here and I’m having a lot of fun.

Chris Engelbert: All right, cool. So you mentioned Styra, I always thought it was Styra [Steera] to be honest, but okay, fair enough.

Anders Eknert: Yeah, no, the Swedish pronunciation would be ‘Steera’. So you’re definitely right. It is a Swedish word, which means to steer or to navigate.

Chris Engelbert: Oh, okay, yeah.

Anders Eknert: So you’re absolutely right. I’m just using the Americanized, the bastardized pronunciation.

Chris Engelbert: That’s fair, probably because I’m German that would be my initial thought. And it kind of makes sense. So in German it would probably be “steuern” or something.

All right, so tell us a little bit about Styra. You already mentioned the OPA project. I guess we’re coming back to that in a second, but maybe a little bit about the company itself.

Anders Eknert: Yeah, sure. Styra was founded by the creators of OPA and the idea, I think, is like the main thing. Everything at Styra revolves around OPA and I think it always has and I’m pretty sure it always will to some extent.

So what Styra does is we created and maintain the OPA project. We created and maintain a lot of the things you’ll find in the ecosystem around OPA and Styra. And also of course we’re a commercial company. So there are two products that are both based around OPA. One is Styra DAS, which is a commercial control plane, which allows you to manage OPA at scale. So like from the whole kind of policy lifecycle. And then there’s an enterprise distribution of OPA as well, which has basically a whole different runtime, which allows it to consume much less memory, evaluate faster, connect to various data sources and so on. So basically both the distributed component and the centralized component.

Chris Engelbert: Right, okay. You mentioned OPA a few times, I think you already mentioned what it really means, but maybe we need to dig into that a little bit deeper. So I think OPA is the Open Policy Agent. And if I’m not mistaken, it’s a framework to actually build policy as we call it policy as code.

Anders Eknert: That’s right, that’s right. So yeah, the idea behind OPA is basically that you define your policies as code, but not just code as like any other code running or which is kind of coupled to your applications, but rather that you try and decouple that part of your code and move it outside of your application so you can work with that in isolation.

And some common use cases could be things like authorization. And I mentioned before this need where you have like a complex environment, you have a whole bunch of services and you need to control authorization. How do we do authorization here? How do we make changes to this at runtime? How do we know what authorization decisions got logged or what people did in our systems? So how do we do auditing of this? So that is one type of policy and it’s a very common one.

But it doesn’t stop there. Basically anywhere you can define rules, you can define policy. So other common use cases are policy for infrastructure where you want to say like, I don’t want to allow pods to run in my Kubernetes cluster unless they have a well-defined security context or if they don’t allow mounts of certain types and so on. So you basically define the rules for your infrastructure. And this could be things like Terraform plans, Kubernetes resource manifests, or simply just JSON and YAML files on disk. So there are many ways to, and many places where you might want to enforce policy. And the whole idea behind OPA is that you have one way of doing it and it’s a unified way of doing it. So there are many policy engines out there and most of them do this for one particular use case. So there might be a policy engine that does authorization and many others that do infrastructure and so on. But that all means that you’re still going to end up with this problem where policy is scattered all over the place, it looks different, it logs different and so on. While with OPA, you have one unified way of doing this and to work with policy across your whole stack and organization. So that is kind of the idea behind OPA.

Chris Engelbert: So that means if I’m thinking about something like simplyblock being a cloud native block storage, I could prevent services from mounting our block devices through the policies, right? So something like, okay, cool.

Anders Eknert: Right

Chris Engelbert: You mentioned authorization, I guess that is probably the most common thing when people think about policy management in general. What I kind of find interesting is, in the past, when you did those things, there was also often the actual policies or the rules for permission configuration or something. It was already like a configuration file, but with OPA, you kind of made this like the first first-class spot. Like it shouldn’t be in your code. Here’s the framework that you can just drop into or drop before your application, I think, right? It’s not even in the application itself.

Anders Eknert: No, I guess it depends, but most commonly you’ll have like a separate policy repo where that goes. And of course, a benefit of that is like, we’re not giving up on code. Like we still want to treat policy as code. We want to be able to test it. We want to be able to review it. We want to work with all of these things like lint it or what not. We want to work with all these good tools and processes that we kind of established for any development. We want to kind of piggyback on that for policy just as we do for anything else. So if you want to change something in a policy, the way you do that is you submit a pull request. It’s not like you need to call a manager or you need to submit a form or something. That is how it used to be, right? But we want to, as developers, we want to work with these kinds of things like we work with any other type of code.

Chris Engelbert: Right. So how does it look like from a developer’s point of view? I mean, you can use it to, I think automatically create credentials for something like Postgres. Or is that the DAS tool? Do you need one of the enterprise tools for that?

Anders Eknert: No, yeah, creating credentials, I guess, you could definitely use OPA for that. But I think in most cases, what you use OPA for is basically to make decisions that are either most commonly they’re yes or no. ‘So should we allow these credentials?’ would be probably a better use case for OPA. ‘No, we should not allow them because they’re not sufficiently secure’ or what have you. But yeah, you can use OPA and Rego, the policy language, for a whole lot of things and a whole lot of things that we might not have decided for initially. So as an example, like there’s this linter for Rego, which is called Regal that I have been working on for the past year or so. And that linter itself is written mostly in Rego. So we kind of use Rego to define the rules of what you can do in Rego.

Chris Engelbert: Like a small exception.

Anders Eknert: Yeah, yeah. There’s a lot of that.

Chris Engelbert: All right. I mean, you know that your language is good when you can build your own stuff in your own language, right?

Anders Eknert: Exactly.

Chris Engelbert: So coming back to the original question, like what does it look like from a developer’s point of view if I want to access, for example, a Postgres database?

Anders Eknert: Right. So the way OPA works, it basically acts as a layer in between. So you probably have a service between your database and your user or another service. So rather than having that user or service go right to the database, they’d query that service for access. And in that service, you’d have an integration with OPA, either with OPA running as another service or running embedded inside of that service. And that OPA would determine whether access should be allowed or not based on policy and data that it has been provided.

Chris Engelbert: Right. Okay, got it, got it. I actually thought that, maybe I’m wrong because I’m thinking one of the enterprise features or enterprise products, I thought it was its own service that handles all of that automatically, but maybe I misunderstood to be honest. So there are, as you said, there’s OPA enterprise and there is DAS, the declarative authorization service.

Anders Eknert: Yeah, yeah, that’s right. You got it right. I remembered right.

Chris Engelbert: So maybe tell us a little bit about those. Maybe I’m mixing things up here.

Anders Eknert: Sure. So I talked a bit about OPA and OPA access to distributed component or the decision point. So that’s where the decisions are made. So OPA is going to tell the user or another service, should we allow this or not. And once you start to have tens or twenties or hundreds or thousands of these OPAs running in your cluster, and if you have a distributed environment and you want to do like zero trust, microservice authorization or whatnot, you’re going to have hundreds or thousands of OPAs. So the problem that Styra DAS solves is essentially like, how do we manage this at scale? How do I know what version or which policy is deployed in all these environments? How do I manage policy changes between like dev, test, prod, and so on? But it kind of handles the whole policy lifecycle. We talked about testing before. We talked about things like auditing. How are these things logged? How can I search these logs? Can I use these logs to replay a decision and see, like, if I did change this, would it have an impact on the outcome and so on?

So it’s basically the centralized component. If OPA is the distributed component, Styra DAS provides a centralized component which allows things like a security team or even a policy team to kind of gain this level of control that would previously be missing when you just let any developer team handle this on their own.

Chris Engelbert: So it’s a little bit like fleet management for your policies.

Anders Eknert: Yes, that is right.

Chris Engelbert: Okay, that makes sense. And the DAS specifically, that is the management control or the management tool?

Anders Eknert: Yeah, that it is.

Chris Engelbert: Okay.

Anders Eknert: And then the enterprise OPA is a drop-in replacement for OPA adding a whole bunch of features on top of it, like reduced memory usage, direct integrations with data sources, things like Kafka streaming data from Kafka and so on and so forth. So we provide commercial solutions both for the centralized part and the distributed part.

Chris Engelbert: Right, okay. I think now I remember where my confusion comes from. I think I saw OPA Enterprise and saw all the services which are basically source connectors. So I think you already mentioned Kubernetes before, but how does that work in the Kubernetes environment? I think you can, as you said, deploy it as its own service or run it embedded in microservices. How would that apply together somehow? I mean, we’re a cloud podcast.

Anders Eknert: Yeah, of course, of course. So in the context of Kubernetes, there’s basically two use cases. Like the first one we kind of covered, it’s authorization in the level, like inside of the workloads. Our applications need to know that the user trying to do something is authorized to do so. In that context, you’d normally have OPA running as a sidecar or in a gateway or as part of like an envoy proxy or something like that. So it basically provides a layer on top or before any request is hitting an actual application.

Chris Engelbert: In the sense of user operated.

Anders Eknert: Yeah, exactly. So on the next content or the next use case for OPA and Kubernetes is commonly like admission control, where Kubernetes itself or the Kubernetes API is protected by OPA. So whenever you try and make a modification to Kubernetes or the database etcd, the Kubernetes API reaches out to OPA to ask, like should this be allowed or not? So if you try and deploy a pod or a deployment or I don’t know, what have you, what kind of resources, OPA will be provided at resource. Again, it’s just JSON or YAML. So anything that’s JSON or YAML is basically what OPA has to work with. It doesn’t even know, like OPA doesn’t know what a Kubernetes resource is. It just seems like here’s a YAML document or here’s a JSON document. Is this or that property that I expect, is it in this JSON blob? And does it have the values that I need? If it doesn’t, it’s not approved. So we’re going to deny that. So basically just tells the Kubernetes API, no, this should not be allowed and the Kubernetes API will enforce that. So the user will see this was denied because this or that reason.

Chris Engelbert: So that means I can also use it in between any Kubernetes services, everything or anything deployed into Kubernetes, I guess, not just the Kubernetes API.

Anders Eknert: Yeah, anything you try and deploy, like for modifications, is going to have to pass through the Kubernetes API.

Chris Engelbert: That’s a really interesting thing. So I guess going back to the simplyblock use case, that would probably be where our authorization layer or approval layer would sit, basically either approving or denying the CSI deployment.

Anders Eknert: Yeah.

Chris Engelbert: Okay, that makes sense. So because we’re already running out of time, do you think that, or well, I think the answer is yes, but maybe you can elaborate a little bit on that. Do you think that authorization policies or policies in general became more important with the move to cloud? Probably more people have access to services because they have to, something like that.

Anders Eknert: Yeah, I’d say like they were probably just as important back in the days. What really changed with like the invent of cloud and this kind of automation is the level of scale that any individual engineer can work with. Like in the past, you’d have an infra engineer would perhaps manage like 20 machines or something like that. While today they could manage thousands of machines or virtual machines in cloud instances or whatnot.

And once you reach that level of scale, there’s basically no way that you can do policy like manually, that you have a PDF document somewhere where it says like, you cannot deploy things unless these conditions are met. And then have engineers sit and try and make an inventory of what do we have here? And are we all compliant? That doesn’t work.

So that is basically the difference today from how policy was handled in the past. We need to automate every kind of policy check, just as we automated infrastructure with the cloud.

Chris Engelbert: Yeah, that makes sense. I think the scale is a good point. It was not something I thought about. My thought was more in the sense that you probably have much bigger teams than you had in the past, which also makes it much more complicated to manage policies or make sure that just the right people have access. And many times people get access because somebody else is on vacation, and it never gets removed again. We all know how it worked in the past.

Anders Eknert: Yeah, yeah. And another difference today compared to 20 years ago is, at least when I started working in tech, if you went to any larger company, they'd say, 'Hey, we're doing Java here,' or 'We're doing .NET.' But if you go to those companies today, it's like, 'There's going to be Python. There's going to be Erlang. There's going to be some Clojure running somewhere. There's going to be so many different things.'

This idea of team autonomy, of teams deciding for themselves what the best solution for any given problem is, I love that. It makes it so much more interesting to work in tech, but it also provides a huge challenge for anything that is security related, because anywhere you need to centralize or have some form of control, it's really, really hard. How do you audit something if it's in eight different programming languages? I can barely understand two of them. How would I do that?

Chris Engelbert: How do you make sure that all the policies are implemented? If a policy change happens, yeah, you're right, you have to implement it in multiple languages, and the descriptor language for the rules isn't the same. Yeah, that's a good point. That's a very good point actually. And just because of time, I think I would have a million more questions, but there's one thing that I always have to ask. What do you think is the next big thing in terms of cloud, in your case authorization policies, but also in the broader scheme of everything?

Anders Eknert: Yeah, sure. So I'd say, first of all, I think both identity and access control are kind of slow moving, and for good reasons. There's not going to be a revolutionary thing or disruptive event that turns everything around. I think that's basically how it has to be. We can rely on things not to change or update too frequently or too dramatically.

So as for what the next big thing is: I still think this area where we decouple policy and work with it consistently across large organizations is the next revolutionary thing. There are definitely a lot of adopters already, but we're just at the start of this. And again, organizations don't just swap out how they do authorization or identity; that could take a decade or so. So I still think that policy as code, while it's starting to become an established concept, is still the next big thing. And that's why it's also so exciting to work in this space.

Chris Engelbert: All right, fair enough. At least you didn’t say automatic AI generation.

Anders Eknert: No, God no.

Chris Engelbert: That would have been really the next big thing. Now we’re talking. No, seriously. Thank you very much. That was very informative. I loved that. Yeah, thank you for being here.

Anders Eknert: Thanks for having me.

Chris Engelbert: And for the audience, next week, same time, same podcast channel, whatever you want to call that. Hope to hear you again or you hear me again. And thank you very much.

Key Takeaways

In this episode of simplyblock’s Cloud Commute Podcast, host Chris Engelbert welcomes Anders Eknert, a developer advocate and DevRel lead at Styra, the company behind the Open Policy Agent (OPA) project. The conversation dives into Anders’ background, Styra’s mission, and the significance of OPA in managing policies at scale.

Anders Eknert works as a Developer Advocate/DevRel at Styra, the company responsible for the Open Policy Agent (OPA) Project. He’s been with the company for 3.5 years and was previously involved in the OPA project at another company.

Styra created and maintains the OPA project with 2 key products around OPA: 1) Styra DAS, a commercial control plane for managing OPA at scale, handling the entire policy lifecycle, and 2) an enterprise distribution of OPA, which has a different runtime that consumes less memory, evaluates faster, connects to various data sources, etc. If OPA is the distributed component, Styra DAS is the centralized component.

OPA is a framework to build and run policies – a project for defining policies as code, decoupled from applications, for use cases like authorization, or policy for infrastructure. The idea behind OPA is that it allows a unified way of working with policy across your whole stack and organization.

In the context of Kubernetes, there are 2 key use cases: 1) authorization inside of the workloads, where OPA can be deployed as a sidecar, in a gateway, or as part of an Envoy proxy; 2) admission control, where the Kubernetes API is protected by OPA (see the sketch below).
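To make the "OPA just sees JSON" point concrete, here is a minimal sketch of how a workload (or the Kubernetes API, via an admission webhook) could query an OPA sidecar over OPA's REST Data API. It assumes an OPA server listening on localhost:8181; the policy package path kubernetes/admission/deny and the privileged-pod input document are hypothetical and must match whatever Rego policy is actually loaded.

    import json
    import urllib.request

    # A hypothetical AdmissionReview-style input. To OPA this is just a JSON
    # document; it has no notion of what a Kubernetes resource is.
    admission_input = {
        "input": {
            "request": {
                "kind": {"kind": "Pod"},
                "object": {
                    "metadata": {"name": "demo"},
                    "spec": {"containers": [{
                        "name": "app",
                        "image": "nginx:latest",
                        "securityContext": {"privileged": True},
                    }]},
                },
            }
        }
    }

    # OPA's Data API: POST /v1/data/<package path> with {"input": ...}.
    # The path below is an assumption; it must match the package your policy declares.
    req = urllib.request.Request(
        "http://localhost:8181/v1/data/kubernetes/admission/deny",
        data=json.dumps(admission_input).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        decision = json.load(resp)

    # A deny-style policy typically returns a set of violation messages.
    violations = decision.get("result", [])
    print("denied:" if violations else "allowed", violations)

The same call shape works for the in-workload authorization use case; only the input document and the policy path change.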

Anders also talks about the advent of the cloud and how policy management and automation have become essential due to the scale at which engineers operate today. He also discusses today's diverse programming environments and team autonomy, both of which necessitate a unified approach to policy management, making tools like OPA crucial.

Anders predicts that policy as code will continue to gain traction, offering a consistent and automated way to manage policies across organizations.

The post Policy Management at Cloud-Scale with Anders Eknert from Styra (video + interview) appeared first on simplyblock.

Continuous vulnerability scanning in production with Oshrat Nir from ARMO https://www.simplyblock.io/blog/continuous-vulnerability-scanning-in-production-with-video/ Fri, 24 May 2024 12:11:05 +0000 https://www.simplyblock.io/?p=266 This interview is part of simplyblock's Cloud Commute Podcast, available on Youtube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site. In this installment of the podcast, we're joined by Oshrat Nir (Twitter/X, Personal Blog), a Developer Advocate from ARMO, who talks about the importance of runtime […]

The post Continuous vulnerability scanning in production with Oshrat Nir from ARMO appeared first on simplyblock.

This interview is part of simplyblock's Cloud Commute Podcast, available on Youtube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site.

In this installment of the podcast, we're joined by Oshrat Nir (Twitter/X, Personal Blog), a Developer Advocate from ARMO, who talks about the importance of runtime vulnerability scanning. See below for more information on what vulnerability scanning is, which vulnerability scanning tools exist, and how simplyblock uses vulnerability scanning. Also see the interview transcript at the end.

EP13: Continuous vulnerability scanning in production with Oshrat Nir from ARMO

Key Learnings

What is Vulnerability Scanning?

Vulnerability scanning is a security process that involves using automated tools to identify and evaluate security weaknesses in a computer system, network, or application. The main objective of vulnerability scanning is to find vulnerabilities that could potentially be exploited by attackers to gain unauthorized access, cause disruptions, or steal data. Here's a more detailed breakdown of what vulnerability scanning entails (a minimal automation sketch follows the breakdown):

Identification:

  • Asset Discovery: The process begins with identifying all the assets (servers, networks, applications, etc.) within the scope of the scan.
  • Cataloging: Creating a comprehensive list of these assets, including software versions, configurations, and open ports.

Scanning:

  • Automated Tools: Using specialized software tools that automatically scan the identified assets for known vulnerabilities. These tools often maintain a database of known vulnerabilities, which is regularly updated.
  • Types of Scans:

    • Network Scans: Focus on identifying vulnerabilities in network devices and configurations.
    • Host Scans: Target individual computers and servers to find vulnerabilities in operating systems and installed software.
    • Application Scans: Look for security weaknesses in web applications, APIs, and other software applications.

Analysis:

  • Vulnerability Database: Comparing scan results against a database of known vulnerabilities to identify matches.
  • Severity Assessment: Evaluating the severity of identified vulnerabilities based on factors like potential impact, exploitability, and exposure.

Reporting:

  • Detailed Reports: Generating reports that detail the vulnerabilities found, their severity, and recommendations for remediation.
  • Prioritization: Providing a prioritized list of vulnerabilities to address based on their potential impact on the organization.

Remediation:

  • Patch Management: Applying software updates and patches to fix the vulnerabilities.
  • Configuration Changes: Adjusting system and network configurations to eliminate vulnerabilities.
  • Mitigation Strategies: Implementing additional security measures, such as firewalls or intrusion detection systems, to mitigate the risk of exploitation.

Rescanning:

  • Verification: Conducting follow-up scans to ensure that previously identified vulnerabilities have been successfully addressed.
  • Continuous Monitoring: Implementing ongoing scanning to detect new vulnerabilities as they emerge.
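As a hedged illustration of how the scanning, analysis, reporting, and prioritization steps above can be automated, here is a minimal Python sketch. It assumes the open-source scanner Trivy is installed and shells out to its JSON output mode; the field names (Results, Vulnerabilities, Severity, VulnerabilityID, PkgName) reflect Trivy's report format at the time of writing and may differ between scanner versions, so treat this as a sketch rather than production code.

    import json
    import subprocess
    import sys

    SEVERITY_RANK = {"CRITICAL": 4, "HIGH": 3, "MEDIUM": 2, "LOW": 1, "UNKNOWN": 0}

    def scan_image(image):
        """Scan a container image with Trivy and return its vulnerability records."""
        out = subprocess.run(
            ["trivy", "image", "--format", "json", "--quiet", image],
            capture_output=True, text=True, check=True,
        )
        report = json.loads(out.stdout)
        vulns = []
        # Trivy groups findings per target (OS packages, language dependencies, ...).
        for result in report.get("Results", []):
            vulns.extend(result.get("Vulnerabilities") or [])
        return vulns

    def main(image, fail_on="HIGH"):
        vulns = scan_image(image)
        # Prioritization: sort by severity, worst first, and report the top findings.
        vulns.sort(key=lambda v: SEVERITY_RANK.get(v.get("Severity"), 0), reverse=True)
        for v in vulns[:10]:
            print(v.get("Severity"), v.get("VulnerabilityID"), v.get("PkgName"))
        # Gate a pipeline: exit non-zero if anything at or above the threshold was found.
        if any(SEVERITY_RANK.get(v.get("Severity"), 0) >= SEVERITY_RANK[fail_on] for v in vulns):
            sys.exit(1)

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else "nginx:latest")

In a CI pipeline, the non-zero exit code is what blocks the build, mirroring the remediation and rescanning loop described above.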

What are some Vulnerability Scanning Tools?

There are various vulnerability scanning tools available, each with its own focus and strengths. Some of the main types:

  • Network Vulnerability Scanners
  • Web Application Scanners
  • Database Scanners
  • Cloud Vulnerability Scanners

Some of the most widely-used tools include:

  • Tenable Nessus: Comprehensive vulnerability scanner for identifying and assessing vulnerabilities, misconfigurations, and malware across various systems.
  • OpenVAS: An open-source tool for vulnerability scanning and management, derived from Nessus.
  • Enterprise TruRisk™ Platform: A cloud-based service that offers continuous vulnerability scanning and compliance management, previously known as QualysGuard.
  • Rapid7 Nexpose: A real-time vulnerability management solution that helps in identifying, prioritizing, and remediating vulnerabilities.
  • Acunetix: Focused on web application security, it identifies vulnerabilities such as SQL injection, cross-site scripting, and other web-related issues.
  • IBM Security QRadar: A security information and event management (SIEM) solution that integrates vulnerability scanning and management.
  • OWASP ZAP (Zed Attack Proxy): An open-source tool aimed at finding vulnerabilities in web applications.
  • Nikto: An open-source web server scanner that checks for dangerous files, outdated server components, and other security issues.
  • ARMO Kubescape: An open-source Kubernetes security platform offering vulnerability and misconfiguration scanning, risk assessment, as well as reporting on security compliance. See our podcast episode with Oshrat Nir from ARMO.
  • Snyk: A platform that detects vulnerabilities, misconfigurations, and code security flaws throughout the development process. See our podcast episode with Brian Vermeer from Snyk.

How does Simplyblock use Vulnerability Scanning?

Simplyblock employs vulnerability scanning to ensure the security and integrity of the cloud-based aspects of its storage solutions. For the storage clusters, simplyblock works seamlessly with industry-standard vulnerability scanning solutions. That means storage clusters running the simplyblock storage system inside the customer's AWS account can be discovered, catalogued, and monitored for outdated software, misconfigurations, and other security risks. This involves using automated tools to identify, assess, and mitigate potential security threats across the infrastructure.

Transcript

Chris Engelbert: Welcome back to the next episode of simplyblock’s Cloud Commute podcast. This week, I have another guest from the security space, something that really is close to my heart. So thank you for being here, Oshrat. Is that actually correct? I forgot to ask up front.

Oshrat Nir: It’s hard to say my name correctly if you’re not a native Hebrew speaker, but Oshrat is close enough.

Chris Engelbert: Okay. It’s close enough. All right. I forgot to ask that. So maybe you do a quick introduction. Who are you? Where are you from? What do you do? And we’ll take it from there.

Oshrat Nir: So thanks, Chris, for having me. This is a great opportunity. My name is Oshrat Nir. I am currently the developer advocate for ARMO and Kubescape, which is our CNCF sandbox project. We have an enterprise and an open source platform that I look after. I’ve been at ARMO for a year and a half. I’ve been in cloud native for about 5.5 years. And before that, I worked in telco. And fun fact about me is that I lived on 3 continents before I was 9 years old.

Chris Engelbert: All right. We'll come back to that. Or maybe just now. What do you mean you lived on 3 continents?

Oshrat Nir: I was born in Germany, which is Europe. Then I left Germany when I was nearly 3 and moved to the States. I lived in Philadelphia for 6 years. When I was 8.5 years old, I moved to Israel and that's where I've been living since.

Chris Engelbert: All right. So you’re not-

Oshrat Nir: I don’t speak German.

Chris Engelbert: Fair enough.

Oshrat Nir: I tried to learn German when I was working for a German company. My friends at Giant Swarm, shout out to Giant Swarm. But no, they did a lot of good things for me, like introducing me to cloud native, but German was not one of them.

Chris Engelbert: I agree. I feel sad for everyone who has to learn German. The grammar is such a pain. Anyway, you said you work for ARMO. So tell us a little bit about ARMO, a little bit more than that it's open source or enterprise.

Oshrat Nir: Okay. So ARMO is a cybersecurity company. The co-founders are Shauli Rozen, who is now our CEO, and Ben Hirschberg, who is our CTO, and another co-founder who's now on the board called Leonid Sandler. Originally, Leonid and Ben come from cybersecurity. They've been doing it since the 90s. They built out a really, really good product that required installing an agent in a cluster. It was highly intrusive and very resource intensive. It might've been a good idea, but it was maybe five years ahead of its time, because those were the days when agent-less was the thing. And it kind of became a thing. Then what happened was that NSA and CISA came out with the guidelines for hardening Kubernetes. That was in August of 2021. They grabbed that idea and built an open source misconfiguration scanner based on that framework, and that's Kubescape.

They built it out, and it went crazy within days. The star chart was nearly perpendicular. It got to thousands of stars very quickly. By the way, we are waiting to get to 10,000 stars. So if anybody uses and likes us, please, we really, really want to celebrate that 10K milestone. We reached 1,000, 3,000, 5,000 stars very quickly. Then we added more frameworks to the misconfiguration scanner, including the CIS benchmarks. I mean, everybody uses the benchmark. These were all things that allowed people to easily adhere to these frameworks and helped with continuous compliance. But you can't stand still. I don't know, Alice in Wonderland, to quote Lewis Carroll: 'You need to run in order to stay in place,' said the Red Queen to Alice.

So we had to continue to develop the product into a platform, because the misconfiguration scanner is not enough. Then we went into CI/CD scanning, image scanning. So there's image scanning, repository scanning, scanning the cluster. We also have an agent-less flavor, which was the original way we worked. Then we decided, even though past experience showed how the market felt about that, to also develop an agent, an operator that you put on your cluster. Because the things you can see from inside the cluster are not the same as the things you can see from outside the cluster. That's really important in terms of security, because you don't want blind spots. You want to have all your bases covered, if I were to use an American sports analogy. So you want to have everything covered. That's how Kubescape continued to develop.

In December of 2022, Kubescape was accepted by the CNCF as a sandbox project. The first misconfiguration scanner in the CNCF. And we're still there, happy, growing, and we're in a bid for incubation. So if I do another plug here: if you're using Kubescape and you love it, please add yourself to the adopters list, because we want to get to incubation in 2024. We only have 7 months to go, so yeah, please help us with that.

What happened when Kubescape was accepted into the CNCF is that we had to break it out of our enterprise offering, out of our commercial offering. So we broke it out, and now we have two offerings. We have ARMO Platform, which is the enterprise offering, either SaaS or as a private installation, whatever works. And of course Kubescape, which is open source, free for all; anybody can use or contribute. It seems that people really know and love Kubescape. This is the impression I got when I came back from KubeCon in Paris. I mean, people stopped at the ARMO booth and said, "Oh, you're Kubescape." So yeah, Kubescape is very well known. It's a known brand, and people seem to like it, which is great.

Chris Engelbert: Right, right. So as I said, we just had a guest, like, 2 weeks ago, Brian Vermeer from Snyk. I just learned it's actually pronounced Snyk [sneak]. And they're also in the security space. But from my understanding, ARMO is slightly different. Snyk mostly looks at the developer and the build pipeline, trying to make sure that all possible vulnerabilities are found before you actually deploy. Common coding mistakes, like the typical SQL injection, all that kind of stuff, are caught before they can actually get into production. But with the on-site or continuous online scanning, whatever you want to call it, ARMO is on the other side of these things, right? So why would you need that? Why would you want that continuous scanning? I mean, if there was no security issue, why would there be one in production at some point?

Oshrat Nir: Okay, so first, let's dial this back a little. Snyk talks about themselves as an AppSec company, and they look at things from the workload or application point of view, and then they work their way down. And they get informed by information from cloud providers, etc. ARMO is the other way around. We start from the infrastructure. Kubernetes infrastructure is unlike anything that came before. I mean, Kubernetes is different. You can't use legacy processes and tools to scan your Kubernetes, because you just don't get everything that you need. Kubernetes is ephemeral, it scales up, it scales down. Containers don't last as long, so you don't have time to test them. There are a lot of things that you could do in the past that you can't do with Kubernetes.

So the way we look at securing Kubernetes, and by extension the applications or workloads running on it, is that we start from the infrastructure. We work off of those frameworks and best practices that we talked about, and we use runtime to inform our security. Because one of the main problems that people securing Kubernetes have is that if they work strictly according to best practices, their applications break or may break. What you need to do is understand application behavior and then secure the infrastructure informed by that.

So it's sort of a different perspective. We do bottom up and Snyk does top down, and we kind of meet at the application, I would say, because I don't think Snyk goes all the way down to Kubernetes, and we don't go all the way up to the SaaS or all of those little four-letter acronyms that aren't exactly in the Kubernetes world, but above Kubernetes.

Chris Engelbert: So as a company, I actually want both tools, right? I want the development side, the bottom-up, to make sure that I catch as much as possible before even going into production. And I want the top-down approach in production to make sure that nothing happens at runtime. Because I think ARMO also does compliance testing, in the sense that my policies are correct. It looks for misconfiguration. So it looks much more at the operational side, stuff that a lot of the other tools, I think, will not necessarily catch easily.

Oshrat Nir: Correct. ARMO, again, is there throughout the software development lifecycle, from the beginning, even to the point where you can do registry scanning and repo scanning and image scanning up front. And then, as you write things and as you build out your pipelines, you put security gateways in the pipelines using ARMO.

An interesting thing: we have started to leverage eBPF a lot for many of the things that we do. One of the problems in the world of DevOps and operations is alert fatigue and a lot of false positives, so we try to improve the signal-to-noise ratio. People are so overwhelmed. And there's also a missing piece, because even in the world of CVEs, when you're judging things only by their CVSS, only by the severity and the score of the CVE, you might not be as efficient as you need to be. Because sometimes you have a high-severity vulnerability somewhere that doesn't even get loaded into memory. So it's not a problem that you have to deal with now. You can deal with it somewhere in the future when you have time, which is never, because nobody ever has time.

But the idea is, again, having runtime inform what happens in operations, by saying, 'Okay, this is how the application or the workload needs to work, and this is why I care about this vulnerability and not that vulnerability.'

Chris Engelbert: Right, right.

Oshrat Nir: Now, speaking of that, ARMO is introducing cloud-native detection and response for runtime. We already have this in beta in Kubescape, but it's coming to ARMO as well. Since we've been using eBPF to see how applications are supposed to act, so that we can secure the infrastructure without breaking the application, what we're doing now is saying, 'Okay, now we know how the application needs to act, so I can actually alert you when it's acting abnormally.' So we have anomaly detection. I can actually detect the fingerprints of malware, flag that and say, 'Look, this might be problematic. You might need to look at this because you might have a virus.' Because, and sorry for the 90s reference, but I'm a Gen X-er, people might be scanning for CVEs, but they're not looking for viruses on images. And that's just a problem waiting to happen.

Chris Engelbert: Especially with something like the XZ issue just recently.

Oshrat Nir: There you go.

Chris Engelbert: And I think that probably opened the eyes of a lot of people as to what lengths people go to inject stuff into your application and take over either your build pipeline or your eventual production. In the XZ situation, it was a backdoor that would eventually make it into production, giving attackers access to production systems.

Yeah, I agree. And you said another important thing, and I'm coming from a strong Java background: it's about dynamically loading libraries or dependencies. Java was the prime example in the past. Not everything you had in your classpath was necessarily loaded into RAM or into memory. But you have the same thing for JavaScript, for PHP, for Python. Especially JavaScript, TypeScript, and Python are the big upcomers in terms of dynamic languages. So yeah, I get that. That is really interesting: you look at runtime, and just because something is in your image doesn't necessarily mean it's a problem. It becomes a problem the second it's loaded into memory and is available to the application. That makes a lot of sense. So you said ARMO runs inside the Kubernetes cluster, right? There's an operator, I guess.

Oshrat Nir: Yeah.

Chris Engelbert: So do I need to be prepared for anything? Is there anything special I need to think about, or is it literally: you drop it in, and because it's eBPF-based it does all the magic for me and I don't have to think about it at all. Like magic.

Oshrat Nir: Yeah, the idea is for you not to think about it. However, we do give users tools. Again, we’re very cognizant of alert fatigue because what happens is people are overwhelmed. So they’ll either work themselves to burnout or start ignoring things. Neither is a good option.

Okay, so what we want to do is think about the usability of the processes, not just the UX, but the processes that are involved. So we have configurable security controls. You can quiet alerts for specific things, either forever, because this is a risk you're willing to take, or because that's just the way the app works and you can't change it, or you're not changing it for now.

So you can configure the controls, you can silence alerts for a configurable period of time or forever. And all of these things are there to bring you to the point where you really, really focus on the things that you need, and you increase the efficiency of your security work. You only fix what needs fixing. A good example here is an attack path. It's called an attack chain, an attack vector, kill chain; there's lots of terminology around the same thing. But basically what it says is that there's a step-by-step path that an attacker would use in order to compromise your entity. There are different entry points that are caused by either misconfigurations or viruses or vulnerabilities, etc. So what we do is provide a visualization of a possible attack path. I'm hesitant to use the word node because Kubernetes, but it's kind of like a node on a subway map, where you can check for each node what you need to fix. Sometimes there's one node where you need to fix one misconfiguration and you're done, and you've immediately hardened your infrastructure to the point where the attack path is blocked. Of course, you need to fix everything around that. But the first thing you need to do is make sure that you're secure now. And that really helps, and it increases the efficiency.

Chris Engelbert: Right. So you're basically cutting off the chain of possibilities, so that even if a person gets to that point, they're stopped in their tracks. All right. That's interesting. That sounds very useful.

Oshrat Nir: Yeah, I think that's an important thing, because that's basically our North Star. We know that security work is hard. We know that it's been delegated to DevOps people that don't necessarily like it or want to do it, and are overwhelmed with other things and want to do things that they find more interesting, which is great. Although, you know, security people, don't take this personally, I work for a security company, I think it's interesting. But my point is, and I'm sorry, this is a Snyk tagline, sorry, Brian: you want security tools that DevOps people will use. And that's basically what we're doing at ARMO. We want to create a security tool that DevOps people will use and security people will love. And again, sorry, Snyk. It's basically the same thing, but we're coming from the bottom, they're coming from the top.

Chris Engelbert: To be honest, I think that is perfectly fine. They probably appreciate the call-out.

Right. So, because we're pretty much running out of time right now: what is your feeling about security as a topic at companies? Do they neglect it a little bit? Do they see it as important as it should be? Is there headroom?

Oshrat Nir: Well, I spend a lot of time on subreddits of security people. These people are very unhappy. I mean, some of them are really great professionals that want to do a good job, and they feel they're being discounted. Again, there's this problem where there are tools that they want to use, but the DevOps people they serve don't want to use them. So there needs to be a conversation. Security is important. F-16s run on Kubernetes. Water plants, sewage plants, a lot of important infrastructure runs on Kubernetes. So securing Kubernetes is very important. Now, in order for that to happen, everybody needs to get on board with that. And the only way to get on board is to have that conversation and say, 'Okay, this is what needs to be done. This is how we think you need to do it. Are you on board? And if not, how do we get you on board?' And one of the ways to get you on board is: okay, look, you can put this in the CI/CD pipeline and forget about it until it reminds you. You can scan a repository every time you pull from it, or an image every time you pull it. You have a VSCode plugin or a GitHub action. And all of these things exist in order to have that conversation and say, look, security is important, but we don't want to distract you from things that you find important. And that's a conversation that has to happen, all the time. Security doesn't end.

Chris Engelbert: Right, right. Ok, last question. Any predictions or any thoughts on the future of security? Anything you see on the horizon that is upcoming or that needs to happen from your perspective?

Oshrat Nir: Runtime is upcoming. Even two years ago, what was the thing? Nobody was talking about anything except shift-left security. You shift left. DevOps should do it. We're done. We shifted left. And then we found that even if one thing gets through our shift left, our production workloads are in danger. So the next thing on the menu is runtime security.

Chris Engelbert: That's a beautiful last sentence. Very, very nice. Thank you for being here. It was a pleasure having you. I think we never met in person, which is really weird. But since we're both in the Kubernetes space, there is a good chance we do, and I hope we really do. So thank you very much for being here.

Oshrat Nir: Thanks so much for having me, Chris.

Chris Engelbert: Great. For the audience next week, next episode. I hope you’re listening again. And thank you very much for being here as well. Thank you very much. See ya.

Key Takeaways

Oshrat Nir has been with ARMO for 1.5 years, bringing 5.5 years of experience in cloud native technologies. ARMO, the company behind Kubescape, specializes in open source-based CI/CD and Kubernetes security, allowing organizations to be fully compliant with frameworks like NSA or CIS, as well as secure from code to production.

The founders of ARMO built a great product that required installing an agent in a cluster, which was highly intrusive and resource intensive. It was around five years ahead of its time, according to Oshrat. After the NSA and CISA came out with guidelines on hardening Kubernetes, the founders built an open source misconfiguration scanner based on that framework, which became Kubescape.

Kubescape quickly gained popularity, amassing thousands of stars on GitHub, and was accepted by the CNCF (Cloud Native Computing Foundation) as a sandbox project, the first misconfiguration scanner in the CNCF. They're still growing and are aiming to reach incubation in 2024.

Currently they have 2 offerings: the ARMO Platform, which is the enterprise offering, and Kubescape, which is open source.

Oshrat also speaks about Snyk, which focuses on application security from a top-down approach, identifying vulnerabilities during development to prevent issues before deployment. ARMO takes a bottom-up approach, starting from the infrastructure and working upward, “We kind of do bottom up and Snyk does top down, and we kind of meet at the application.”

Oshrat also mentions how they have started to leverage eBPF to improve their scanning without changing the applications or infrastructure, which will help their users, particularly to decrease alert fatigue and the number of false positives.

ARMO is also introducing cloud-native detection and response for runtime. Using eBPF, they are able to integrate additional anomaly detection.

Oshrat also spoke about the importance of the usability of the processes, which is why they have configurable security controls where you can quiet down or configure alerts for a period of time so you can focus on what you need, which greatly increases the efficiency of your security work.

Oshrat underscores the need for dialogue and consensus between security and DevOps teams to prioritize security without overwhelming developers.

Looking ahead, Oshrat predicts that runtime security will be a critical focus, just as shift left security was in the past. ARMO has you covered already.

The post Continuous vulnerability scanning in production with Oshrat Nir from ARMO appeared first on simplyblock.

Automated Vulnerability Detection throughout your Pipeline with Brian Vermeer from Snyk https://www.simplyblock.io/blog/automated-vulnerability-detection-throughout-your-pipeline-with-brian-vermeer-from-snyk-video/ Fri, 10 May 2024 12:12:16 +0000 https://www.simplyblock.io/?p=274 This interview is part of simplyblock's Cloud Commute Podcast, available on Youtube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site. In this installment of the podcast, we're joined by Brian Vermeer (Twitter/X, Personal Blog) from Snyk, a cybersecurity company providing tooling to detect common […]

The post Automated Vulnerability Detection throughout your Pipeline with Brian Vermeer from Snyk appeared first on simplyblock.

This interview is part of simplyblock's Cloud Commute Podcast, available on Youtube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site.

In this installment of the podcast, we're joined by Brian Vermeer (Twitter/X, Personal Blog) from Snyk, a cybersecurity company providing tooling to detect common code issues and vulnerabilities throughout your development and deployment pipeline. He talks about the necessity of multiple checks, the commonly found threats, and how important it is to rebuild images for every deployment, even if the code hasn't changed.

EP11: Automated Vulnerability Detection throughout your Pipeline with Brian Vermeer from Snyk

Chris Engelbert: Welcome back everyone. Welcome back to the next episode of simplyblock’s Cloud Commute podcast. Today I have yet another amazing guest with me, Brian from Snyk.

Brian Vermeer: That's always the question, right? How do you pronounce that name? Is it Snek, Snik, Synk? It's not Synk. It's actually Snyk. Some people pronounce it differently, but I don't like that, and the founder wants it to be Snyk [sneak]. And it's actually an abbreviation.

Chris Engelbert: All right, well, we’ll get into that in a second.

Brian Vermeer: So now you know, I mean.

Chris Engelbert: Yeah, we’ll get back to that in a second. All right. So you’re working for Snyk. But maybe we can talk a little bit about you first, like who you are, where you come from. I mean, we know each other for a couple of years, but…

Brian Vermeer: It's always hard to talk about yourself, right? I'm Brian Vermeer. I live in the Netherlands, just an hour and a half south of Amsterdam. I work for Snyk as a developer advocate. I've been a long-term Java developer, mostly backend developer, for all sorts of jobs within the Netherlands. I'm a Java Champion, very active in the community, specifically the Dutch community, so the Netherlands Java user group and adjacent Java user groups, and I do some stuff in the virtual Java user group that we just relaunched. I try to be active, and I'm just a happy programmer.

Chris Engelbert: You’re just a happy programmer. Does that even exist?

Brian Vermeer: Apparently, I am the living example.

Chris Engelbert: All right, fair enough. So let’s get back to Snyk and the cool abbreviation. What is Snyk? What does it mean? What do you guys do?

Brian Vermeer: Well, first of all, we create security tooling for developers. Our mission is to make security an integrated thing within your development lifecycle. In most companies, it's an afterthought: one security team trying to do a lot of things, and we have something in the pipeline, and that's horrible because I don't want to deal with that. If all tests are green, it's fine. But what if we perceive it in such a way as, "Hey, catch it early, from your local machine." Just like you do with unit tests. Maybe creating unit tests is already a hard job, but hey, let's say we're all good at that. Why not perceive security in that way? If we can catch things early, we probably do not have to do a lot of rework if something comes up. So that's why we create tooling for all stages of your software development lifecycle. And as I said, Snyk is an abbreviation. So now you know.

Chris Engelbert: So what does it mean? Or do you forget?

Brian Vermeer: So Now You Know.

Chris Engelbert: Oh!

Brian Vermeer: Literally. So now you know.

Chris Engelbert: Oh, that took a second.

Brian Vermeer: Yep. That takes a while for some people. Now, the thought behind that is that we started as a software composition analysis tool, and people just bring in libraries. They have no clue what they're bringing in and what kind of implications come with that. So we can do tests on that and make reports of that, and you can make the decisions. So now at least you know what you're getting into.

Chris Engelbert: Right. And I think with implications and stuff, you mean transitive dependencies. Yeah. Stuff like that.

Brian Vermeer: Yeah.

Chris Engelbert: Yeah. And I guess that just got worse with Docker and images and all that kind of stuff.

Brian Vermeer: I won't say it gets worse. I think we shifted the problem. I mean, we used to do this on bare metal machines as well, and these machines also had an operating system. So I'm not saying it's getting worse, but developers get more responsibility, because, let's say we're doing DevOps, whatever that may mean. I mean, ask 10 DevOps engineers what DevOps is (that's nowadays a job) and you'll probably get a lot of answers about tooling. But apparently what we did is tear down the wall between old-fashioned development and getting things to production, the ops folks. So we're now responsible as a team to do all of that. And now your container, your environment, your cluster, your code are all together in your Git repository. It's all code now. And the team creating it is responsible for it. So yes, it shifted the problem from being in separate teams to all in one team that needs to create and maintain stuff. So I don't think we're getting into worse problems. I think we're shifting the problems, and it's getting easier to get into problems.

Chris Engelbert: Yeah. Okay. We've broadened the scope of where you could potentially run into issues. So the way it works is that Snyk... I need to remember to say Snyk and not Synk, because now it makes sense.

Brian Vermeer: I'm okay with however you call it, as long as you don't say sync. Then you're actually messing up letters.

Chris Engelbert: Yeah, sync is different. It's not awkward and it's not Worcester. Anyway. So that means the tooling is actually looking into, I think, the dependencies, the build environment, whatever ends up in your Docker container or your container image. Let's say it that way; nobody's using Docker anymore. And all those other things. So basically everything along the build pipeline, right?

Brian Vermeer: Yeah. Actually, we start at the custom code that you're writing. We do static analysis on that. We might combine that with stuff that we know from your dependencies and transitive dependencies, like, "Hey, you bring in a Spring Boot starter that has a ton of implications on how many libraries come in." Are these affected, yes or no, et cetera, et cetera. Then we go one layer deeper, around that: your container images. Let's say it's Docker, because it's still the most commonly used, but whatever: any image is built on a base image, and you probably streamed some binaries in there. So what's there? That's another shell around the whole application. And then, in the end, you get into the configuration for your infrastructure as code. That can go wrong by not having a security context, or by some policies that are not set correctly, or something like that. Some pods that you gave more privileges than you should have because, hey, it works on my machine, right? Let's ship it. These kinds of things. So on all these four fronts, we try to provide tooling and test capabilities in such a way that you can choose how you want to utilize them: in a CI pipeline, on your local machine, in between, or as part of your build, whatever fits your needs. Instead of: "Hey, this needs to be part of your build pipeline, because that's how the tool works." I was a backend developer myself for a long time, and I was the person that was like: if we need to satisfy that tool, I will find a way around it.

Chris Engelbert: Yeah, I hear you.

Brian Vermeer: Which defeats the purpose, because at that point you're only checking boxes. So I think if these tools fit your way of working and implement your way of working, then you actually have an enabler instead of a wall that you bump into every time.

Chris Engelbert: Yeah. That makes a lot of sense. So that means, when you say you start at the code level, simple things, like the still most common SQL injection issues, all that kind of stuff, are probably handled as well, right?

Brian Vermeer: Yeah. SQL injection, path traversal, cross-site scripting, all these kinds of things will get flagged, and where possible we will give you remediation advice. And then we go levels deeper. So you can almost say it's four different types of scanners that you can use in whatever way you want. Some people say, "No, I'm only using the dependency analysis stuff." That's also fine. It's just four different capabilities for basically four levels in your application, because it's no longer just your binary that you put in. It's more than that, as we just discussed.
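As a rough sketch of what exercising those four levels could look like from a CI job or a local machine: the four sub-commands below (snyk code test, snyk test, snyk container test, snyk iac test) exist in the Snyk CLI, but the placeholder image name and the idea of bundling them into one Python gate script are illustrative assumptions, not an official Snyk workflow.

    import subprocess
    import sys

    # Four levels, four checks. The sub-commands are real Snyk CLI commands;
    # "myapp:latest" is a hypothetical placeholder image name.
    CHECKS = [
        ["snyk", "code", "test"],                       # custom code (static analysis)
        ["snyk", "test"],                               # open source dependencies
        ["snyk", "container", "test", "myapp:latest"],  # container image and base image
        ["snyk", "iac", "test"],                        # infrastructure-as-code configuration
    ]

    failed = False
    for cmd in CHECKS:
        print("running:", " ".join(cmd))
        # The Snyk CLI conventionally exits non-zero when issues are found.
        if subprocess.run(cmd).returncode != 0:
            failed = True

    sys.exit(1 if failed else 0)

Each check can just as well run on its own, in a pre-commit hook, a CI stage, or an IDE plugin, which is the "use it where it fits your way of working" point Brian makes above.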

Chris Engelbert: So, when we look at the recent and not-so-recent past... I mean, we're both coming from the Java world. You said you were a Java programmer for a long time, and I am too. The Java world isn't necessarily known for massive CVEs. Except Log4Shell.

Brian Vermeer: Yeah, that was a big,

Chris Engelbert: Right? Yeah.

Brian Vermeer: The thing, I think, is that in the Java world it's either not so big or very big. There's no in-between, or at least it doesn't get the amount of attention. But yeah, Log4Shell was a big one. First of all, props to the folks that maintain that library: I think there were only three active maintainers at the point when the thing came out, and it's a small library that is used and consumed by a lot of bigger frameworks. So everybody was looking at them doing a bad job, but it was just three guys that maintained it voluntarily.

Chris Engelbert: So for the people that do not know what Log4Shell was: Log4j is one of the most common logging frameworks in Java. And there was a way to inject remote code and execute it with whatever permission your process had. And as you said, a lot of people love to run their containers with root privileges. So there is your problem right there. But yeah, Log4Shell was, at least from what I can remember, probably the biggest CVE in the Java world since I joined.

Brian Vermeer: Maybe that one, but in 2017 we had the Apache Struts one that blew away our friendly neighborhood Equifax. But yeah.

Chris Engelbert: I'm not talking about Struts, because that was so long deprecated by that point in time. It was... they deserved it. No, but seriously, yeah. True, the Struts one was also pretty big. But since we are recording this on April 3rd, there was a very, very interesting thing just two or three days ago, around April 1st. I initially thought it was an April Fools' joke, but it was unfortunately not.

Brian Vermeer: I think it was the last day of March though. So it was not.

Chris Engelbert: Maybe I just saw it on April 1st. To be honest, initially I thought, okay, that's a really bad April Fools' joke. So what we're talking about is the XZ issue. Maybe you want to say a few words about that?

Brian Vermeer: Well, let's keep it simple. The XZ issue is basically an issue in one of the tools that come with some Linux distributions. Long story short, I'm not sure if they already created exploits for it. I didn't actually try it, because we've got folks doing the research. But apparently, because of that tool, you could do nasty stuff such as arbitrary code execution or interfering with secure connections. And it comes with your operating system. So that means if you have a Docker image, or whatever image, and you're based on a certain well-known Linux distribution, you might be affected, regardless of what your application does. And it's a big one. If you want to go deeper, there are tons of blogs by people that can explain what the actual problem was. But I think for the general developer: don't shut your eyes and think "it's not on my machine." It might be in your container, because you're using a now-outdated image.

Chris Engelbert: I think there are two things. First of all, I think it was found before it actually made it into any stable distribution, which is good. So if you're not using any of the rolling or self-built distributions, you're probably good. But what I found more interesting is that this backdoor was introduced by a person that had been working on the tool for quite a while, over a year or so, basically gaining the trust of the actual maintainers and eventually sneaking stuff in. And that is why I think tools like Snyk, or, let's be blunt, some of the competitors, are so important, right? Because it's really hard to follow all of the new CVEs, and sometimes they don't blow up this big, so you probably don't even hear about them. For that reason, it's really important to have those tools.

Brian Vermeer: I totally agree. As a development team, this is a side effect: you're building stuff, and you don't focus on manually checking whatever comes in and whether it's vulnerable or not. But you should be aware of these kinds of things, so when they come in, you can make appropriate choices. I'm not saying you have to fix it; that's up to you, depending on your threat level and whatever is going on in your company. But you need to be able to make these decisions based on accurate knowledge, and have the appropriate knowledge so that you can actually make such a decision. And yeah, you don't want to manually hunt these things down. You want to be actively pinged when something happens to your application that might have implications for your security risk.

Chris Engelbert: Right. And what's your own feeling: in the past, we mostly deployed on-prem installations or in private clouds, but with the shift to public cloud, did we increase the risk factor? Did we increase the attack surface?

Brian Vermeer: Yes. The short answer is yes. There are more things that we have under our control as a development team, and we do not always have the necessary specialties within the team. So we're doing the best we can, but that means we've got multiple attack surfaces. The connection with your application is one thing, but if I can get into your container for some reason, I can use it. Even though some things in containers or operating systems might not be directly exploitable, they can be part of a chain that causes a problem. If there's one hole, I could get in, use certain objects or binaries in my chain of attacks, and make it a domino effect, basically. So you're giving people more and more ammunition. And as we automate certain things, we do not always have the necessary knowledge about everything, and that might become bigger and bigger. Plus the fast pace we're currently moving at. Like, tell me, 10 years ago, how were you deploying?

Chris Engelbert: I don’t know. I don’t remember. I don’t remember yesterday.

Brian Vermeer: Yeah. But I mean, probably not three times a day. 10 years ago we were probably deploying once a month; you had time to test, or something like that. So it's a combination of doing it all within one team, which yes, we should do, but also the fast pace at which we need to release nowadays. The whole continuous development and continuous deployment is part of this. If you're actually doing that, of course.

Chris Engelbert: Yeah, that's true. I think it would have been about every two weeks or so. You normally had one week of development, one week of bug fixing and testing, and then you deployed it. Now you do something, you think it's ready, it runs through the pipeline, and in the best case it gets deployed immediately. And if something breaks, you fix it. Or, in the worst case, you roll back if it's really bad.

Brian Vermeer: But on the other end, say you're an application developer, and you ship that stuff in a container. Are you touching or rebuilding your container if your application didn't change?

Chris Engelbert: Yes.

Brian Vermeer: Probably a lot of folks won't, because, hey, some things didn't change. But it can be that the image you base your stuff upon, your base image, however you manage that, whether it's company-wide or you just pull something from Docker Hub or wherever, is another layer that might have changed, might have been fixed, or might have had vulnerabilities found in it. So it's no longer: "Hey, I didn't touch that application, so I don't have to rebuild." Yes, you should, because other layers in that whole application changed.

Chris Engelbert: Right, right. And I think you brought up an important other factor. It might be that in between, since the last deployment, a CVE has been found, or something else, right? So you want to make sure you test it again. And then with other programming languages, I'm not naming things here, you might get a slightly newer version of a dependency when you do a fresh install, right? There are so many different things. Applications these days, even microservices, are so complex because they normally need so many different dependencies, and it is hard to keep an eye on that. And that kind of brings me to the next question: how does Snyk play into something like an SBOM, the software bill of materials?

Brian Vermeer: Getting onto the hype train of SBOMs. And it's not just a hype train; it's a serious thing. For folks that don't know: you can compare an SBOM to the ingredients and nutrition list for whatever you're about to consume. What's in there? You have no clue; the nutrition facts on the package should say what's in it, right? That's how you should perceive an SBOM. If you create an artifact, then you should create a suitable SBOM with it that basically says, "Okay, I'm using these dependencies and these transitive dependencies, and maybe even these Docker containers or whatever. I'm using these things to create my artifact." And a consumer of that artifact is then able to search around that. Say a CVE comes up, a new Log4Shell, let's make it big. "Am I affected?" That's the first question a consumer, somebody that uses your artifact, asks. And with an SBOM, you have a standardized way of having that. Well, there are three standards, but nevertheless, it's a standardized way of making it at least machine-searchable, to see if you are vulnerable or not. So how do we play into that? You can use our Snyk tooling to create SBOMs for your applications or for your containers; that's possible. And we have the capabilities to read SBOMs in, to see if these SBOMs contain packages or artifacts that have known vulnerabilities. So you can, again, take the appropriate measures. SBOMs are great from the consumer side: it becomes very clear whether the stuff that I got from the internet or from a supplier, because we're talking about supply chains all the time, stuff that I build upon or that I'm using, contains problems or potential problems when something new comes up. And yes, we have capabilities for creating these SBOMs and scanning these SBOMs.
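To make the "machine-searchable" point concrete, here is a minimal sketch that answers the "am I affected?" question against a CycloneDX-format SBOM (one of the standards Brian alludes to). The one-entry advisory set with a Log4Shell-era package URL is a hypothetical stand-in for a real vulnerability database; the field names follow the CycloneDX JSON schema.

    import json

    # Hypothetical advisory data: package URLs (purls) known to be affected.
    # In practice this would come from a vulnerability database or a tool such as Snyk.
    KNOWN_BAD_PURLS = {
        "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1",
    }

    def affected_components(sbom_path):
        """List components in a CycloneDX JSON SBOM that match known advisories."""
        with open(sbom_path) as f:
            sbom = json.load(f)
        hits = []
        # CycloneDX lists the "ingredients" under "components", each ideally with a purl.
        for comp in sbom.get("components", []):
            if comp.get("purl", "") in KNOWN_BAD_PURLS:
                hits.append(f"{comp.get('name')}@{comp.get('version')} ({comp.get('purl')})")
        return hits

    if __name__ == "__main__":
        for hit in affected_components("sbom.json"):
            print("affected:", hit)

Because the SBOM is generated once at build time, this kind of check can run long after the artifact shipped, which is exactly the consumer-side use case described above.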

Chris Engelbert: All right. We're basically out of time, but there's one more question I still want to ask: where do you personally see the biggest trend, related to Snyk or to security in general?

Brian Vermeer: The biggest trend is the AI hype nowadays. And that is definitely a thing. What people think is that AI is a suitable replacement for a security engineer. I exaggerate, but it's not, because we have demos where we let a well-known code assistant tool spit out vulnerable code, for instance. So I think the trend is two things. One is the whole software supply chain: whatever you pull in, you should look at. The other is that if people are using AI, they shouldn't trust it blindly. And that goes for both the stuff in your supply chain and the code generated by a code assistant. You should know what you're doing. It's a great tool, but don't trust it blindly, because it can also hallucinate and bring in stuff that you didn't expect if you are not aware of what you're doing.

Chris Engelbert: So yeah. I think that is a perfect closing. It can hallucinate things.

Brian Vermeer: Oh, definitely, definitely. It's a lot of fun to play with, and it's a great tool. But you should know that, first of all, it doesn't replace developers that think. Thinking is still something an AI doesn't do.

Chris Engelbert: All right. Thank you very much. Time is over; 20 minutes is always super, super short, but it's supposed to be that way. So Brian, thank you very much for being here. I hope that was not only interesting to me. I actually learned quite a few new things about Snyk, because I hadn't looked into it for a couple of years. So yeah, thank you very much. And to the audience: I hope you're listening next week. New guest, new episode, and we're going to see you again.

The post Automated Vulnerability Detection throughout your Pipeline with Brian Vermeer from Snyk appeared first on simplyblock.
