Amazon EKS Archives | simplyblock

Best Open Source Tools For Kubernetes


The Kubernetes ecosystem is vibrant and ever-expanding, driven by a community of developers who are committed to enhancing the way we manage and deploy applications. Open-source tools have become an essential part of this ecosystem, offering a wide range of functionalities that streamline Kubernetes operations. These tools are crucial for automating tasks, improving efficiency, and ensuring that your Kubernetes clusters run smoothly. As Kubernetes continues to gain popularity, the demand for robust and reliable open-source tools has also increased. Developers and operators are constantly on the lookout for tools that can help them manage their Kubernetes environments more effectively. In this post, we will explore nine must-know open-source tools that can help you optimize your Kubernetes environment.

1. Helm

Helm is often referred to as the package manager for Kubernetes. It simplifies the process of deploying, managing, and versioning Kubernetes applications. With Helm, you can define, install, and upgrade even the most complex Kubernetes applications. By using Helm Charts, you can easily share your application configurations and manage dependencies between them.
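As a quick, illustrative sketch (repository, chart, and release names below are examples, not recommendations), a typical Helm workflow looks like this:

```bash
# Add a chart repository and refresh the local index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a release from a chart, overriding a default value inline
helm install my-cache bitnami/redis -n data --create-namespace \
  --set architecture=standalone

# Upgrade the release later and inspect its revision history
helm upgrade my-cache bitnami/redis -n data --set architecture=replication
helm history my-cache -n data
```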

2. Prometheus

Prometheus is a leading monitoring and alerting toolkit that’s widely adopted within the Kubernetes community. It collects metrics from your Kubernetes clusters, stores them, and allows you to query and visualize the data. Prometheus is essential for keeping an eye on your infrastructure’s performance and spotting issues before they become critical.
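For illustration, once metrics are collected you can query them through Prometheus' HTTP API. The service name below assumes a default Helm-based installation and may differ in your cluster:

```bash
# Forward the Prometheus API to localhost (service name is an assumption)
kubectl -n monitoring port-forward svc/prometheus-server 9090:80 &

# Query the per-namespace CPU usage rate over the last five minutes
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total[5m])) by (namespace)'
```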

3. Kubectl

Kubectl is the command-line tool that allows you to interact with your Kubernetes clusters. It is indispensable for managing cluster resources, deploying applications, and troubleshooting issues. Whether you’re scaling your applications or inspecting logs, Kubectl provides the commands you need to get the job done.
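The day-to-day tasks mentioned above map to commands like these (namespace and workload names are placeholders):

```bash
# Inspect workloads and their state
kubectl get pods -n production
kubectl describe deployment web -n production

# Scale an application and follow its logs
kubectl scale deployment web --replicas=5 -n production
kubectl logs -f deployment/web -n production

# Open a shell in a running container for troubleshooting
kubectl exec -it deploy/web -n production -- /bin/sh
```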

4. Kustomize

Kustomize is a configuration management tool that helps you customize Kubernetes objects through a file-based approach. It allows you to manage multiple configurations without duplicating YAML manifests. Kustomize’s native support in Kubectl makes it easy to integrate into your existing workflows.
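A minimal sketch of the file-based approach (paths and names are made up): an overlay reuses a shared base and patches only the replica count, applied with kubectl's built-in Kustomize support.

```bash
mkdir -p overlays/prod
cat > overlays/prod/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base              # shared manifests live here
patches:
  - target:
      kind: Deployment
      name: web
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 5
EOF

# Render the result, or apply it directly
kubectl kustomize overlays/prod
kubectl apply -k overlays/prod
```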

5. Argo CD

Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. It enables you to manage your application deployments through Git repositories, ensuring that your applications are always in sync with your Git-based source of truth. Argo CD offers features like automated sync, rollback, and health status monitoring, making it a powerful tool for CI/CD pipelines.
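As a sketch, an Application resource tells Argo CD which Git repository is the source of truth (the repository URL below is made up):

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/demo-app.git  # hypothetical repo
    targetRevision: main
    path: deploy
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:
      prune: true     # remove resources that were deleted from Git
      selfHeal: true  # revert manual drift back to the Git state
EOF
```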

6. Istio

Istio is an open-source service mesh that provides traffic management, security, and observability for microservices. It simplifies the complexity of managing network traffic between services in a Kubernetes cluster. Istio helps ensure that your applications are secure, reliable, and easy to monitor.
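For example, a VirtualService can split traffic between two versions of a service for a canary rollout. This sketch assumes a DestinationRule elsewhere defines the v1 and v2 subsets:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web
spec:
  hosts:
    - web
  http:
    - route:
        - destination:
            host: web
            subset: v1
          weight: 90   # 90% of traffic stays on the stable version
        - destination:
            host: web
            subset: v2
          weight: 10   # 10% goes to the canary
EOF
```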

7. Fluentd

Fluentd is a versatile log management tool that helps you collect, process, and analyze logs from various sources within your Kubernetes cluster. With Fluentd, you can unify your log data into a single source and easily route it to multiple destinations, making it easier to monitor and troubleshoot your applications.

8. Velero

Velero is a backup and recovery solution for Kubernetes clusters. It allows you to back up your cluster resources and persistent volumes, restore them when needed, and even migrate resources between clusters. Velero is an essential tool for disaster recovery planning in Kubernetes environments.
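A rough sketch of the basic workflow (namespace and backup names are placeholders):

```bash
# One-off backup of a namespace, retained for 30 days
velero backup create apps-backup --include-namespaces apps --ttl 720h

# Recurring backup every night at 03:00
velero schedule create apps-daily --schedule "0 3 * * *" --include-namespaces apps

# Restore from an earlier backup, e.g. after an incident or into a new cluster
velero restore create --from-backup apps-backup
```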

9. Kubeapps

Kubeapps is a web-based UI for deploying and managing applications on Kubernetes. It provides a simple and intuitive interface for browsing Helm charts, managing your applications, and even configuring role-based access control (RBAC). Kubeapps makes it easier for developers and operators to work with Kubernetes applications.

Conclusion

These nine open-source tools are integral to optimizing and managing your Kubernetes environment. Each of them addresses a specific aspect of Kubernetes operations, from monitoring and logging to configuration management and deployment. By integrating these tools into your Kubernetes workflow, you can enhance your cluster’s efficiency, reliability, and security.

However, there is more. Simplyblock offers a wide range of benefits to many of the above tools, either by enhancing their capability with high performance and low latency storage options, or by directly integrating with them.

Simplyblock is the intelligent storage orchestrator for Kubernetes. We provide the Kubernetes community with easy-to-use virtual NVMe block devices by combining the power of Amazon EBS and Amazon S3, as well as local instance storage. Seamlessly integrated into Kubernetes as a StorageClass (CSI), simplyblock enables Kubernetes workloads with a requirement for high IOPS and ultra-low latency. Deployed directly into your AWS account, simplyblock takes full responsibility for your data and storage infrastructure, scaling and growing dynamically to meet your storage demands at any point in time.

Why Choose Simplyblock for Kubernetes?

Choosing simplyblock for your Kubernetes workloads comes with several compelling benefits, optimizing workload performance, scalability, and cost-efficiency. Elastic block storage powered by simplyblock is designed for IO-intensive and predictable low-latency workloads.

  • Increase Cost-Efficiency: Optimize resource scaling to exactly meet your current requirements and reduce the overall cloud spend. Grow as needed, not upfront.
  • Maximize Reliability and Speed: Get the best of both worlds with ultra low latency of local instance storage combined with the reliability of Amazon EBS and Amazon S3.
  • Enhance Security: Get an immediate mitigation strategy for availability zone, and even region, outages using simplyblock’s S3 journaling and Point in Time Recovery (PITR) for any application.

If you’re looking to further streamline your Kubernetes operations, simplyblock offers comprehensive solutions that integrate seamlessly with these tools, helping you get the most out of your Kubernetes environment.

Ready to take your Kubernetes management to the next level? Contact simplyblock today to learn how we can help you simplify and enhance your Kubernetes journey.

RDS vs. EKS: The True Cost of Database Management

Databases can make up a significant portion of the costs for a variety of businesses and enterprises, and in particular for SaaS, Fintech, or E-commerce & Retail verticals. Choosing the right database management solution can make or break your business margins. But have you ever wondered about the true cost of your database management? Is your current solution really as cost-effective as you think? Let’s dive deep into the world of database management and uncover the hidden expenses that might be eating away at your bottom line.

The Database Dilemma: Managed Services or Self-Managed?

The first crucial decision comes when choosing the operating model for your databases: should you opt for managed services like AWS RDS or take the reins yourself with a self-managed solution on Kubernetes? It’s not just about the upfront costs – there’s a whole iceberg of expenses lurking beneath the surface.

The Allure of Managed Services

At first glance, managed services like AWS RDS seem to be a no-brainer. They promise hassle-free management, automatic updates, and round-the-clock support. But is it really as rosy as it seems?

The Visible Costs

  1. Subscription Fees : You’re paying for the convenience, and it doesn’t come cheap.
  2. Storage Costs : Every gigabyte counts, and it adds up quickly.
  3. Data Transfer Fees : Moving data in and out? Be prepared to open your wallet.

The Hidden Expenses

  1. Overprovisioning : Are you paying for more than you are actually using?
  2. Personnel costs : Using RDS and assuming you don’t need to understand databases anymore? Surprise! You still need a team to configure the database and set it up for your requirements.
  3. Performance Limitations : When you hit a ceiling, scaling up can be costly.
  4. Vendor Lock-in : Switching providers? That’ll cost you in time and money.
  5. Data Migration : Moving data between services can cost a fortune.
  6. Backup and Storage : Those “convenient” backups? They’re not free. In addition, AWS RDS does not let you plug in any storage solution other than AWS-native EBS volumes, which can get quite expensive if your database is IO-intensive.

The Power of Self-Managed Kubernetes Databases

On the flip side, managing your databases on Kubernetes might seem daunting at first. But let’s break it down and see where you could be saving big.

Initial Investment

  1. Learning Curve : Yes, there’s an upfront cost in time and training. You need engineers on your team who are comfortable with Kubernetes or Amazon EKS.
  2. Setup and Configuration : Getting things right takes effort, but it pays off.

Long-term Savings

  1. Flexibility : Scale up or down as needed, without overpaying.
  2. Multi-Cloud Freedom : Avoid vendor lock-in and negotiate better rates.
  3. Resource Optimization : Use your hardware efficiently across workloads.
  4. Resource Sharing : Kubernetes lets you efficiently allocate resources.
  5. Open-Source Tools : Leverage free, powerful tools for monitoring and management.
  6. Customization : Tailor your setup to your exact needs, no compromise.

Where are the Savings Coming from when using Kubernetes for your Database Management?

In a self-managed Kubernetes environment, you have greater control over resource allocation, leading to improved utilization and efficiency. Here’s why:

a) Dynamic Resource Allocation : Kubernetes allows for fine-grained control over CPU and memory allocation. You can set resource limits and requests at the pod level, ensuring databases only use what they need. Example: During off-peak hours, you can automatically scale down resources, whereas in managed services, you often pay for fixed resources 24/7.

b) Bin Packing : Kubernetes scheduler efficiently packs containers onto nodes, maximizing resource usage. This means you can run more workloads on the same hardware, reducing overall infrastructure costs. Example: You might be able to run both your database and application containers on the same node, optimizing server usage.

c) Avoid Overprovisioning : With managed services, you often need to provision for peak load at all times. In Kubernetes, you can use Horizontal Pod Autoscaling to add resources only when needed. Example: During a traffic spike, you can automatically add more database replicas, then scale down when the spike ends.

d) Resource Quotas : Kubernetes allows setting resource quotas at the namespace level, preventing any single team or application from monopolizing cluster resources. This leads to more efficient resource sharing across your organization.
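As a small sketch of points c) and d) above (namespace, workload names, and limits are arbitrary), a namespace quota plus CPU-based autoscaling could look like this:

```bash
# d) Cap what the team-a namespace may consume in total
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "8"      # total CPU the namespace may request
    requests.memory: 16Gi
    limits.cpu: "16"       # hard ceiling across all pods in team-a
    limits.memory: 32Gi
EOF

# c) Scale read replicas between 1 and 5 based on CPU load
kubectl autoscale statefulset postgres-replicas --min=1 --max=5 --cpu-percent=70 -n team-a
```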

Self-managed Kubernetes databases can also significantly reduce data transfer costs compared to managed services. Here’s how:

a) Co-location of Services : In Kubernetes, you can deploy your databases and application services in the same cluster. This reduces or eliminates data transfer between zones or regions, which is often charged in managed services. Example: If your app and database are in the same Kubernetes cluster, inter-service communication doesn’t incur data transfer fees.

b) Efficient Data Replication : Kubernetes allows for more control over how and when data is replicated. You can optimize replication strategies to reduce unnecessary data movement. Example: You might replicate data during off-peak hours or use differential backups to minimize data transfer.

c) Avoid Provider Lock-in : Managed services often charge for data egress, especially when moving to another provider. With self-managed databases, you have the flexibility to choose the most cost-effective data transfer methods. Example: You could use direct connectivity options or content delivery networks to reduce data transfer costs between regions or clouds.

d) Optimized Backup Strategies : Self-managed solutions allow for more control over backup processes. You can implement incremental backups or use deduplication techniques to reduce the amount of data transferred for backups. Example: Instead of full daily backups (common in managed services), you might do weekly full backups with daily incrementals, significantly reducing data transfer (see the sketch after this list).

e) Multi-Cloud Flexibility : Self-managed Kubernetes databases allow you to strategically place data closer to where it’s consumed. This can reduce long-distance data transfer costs, which are often higher. Example: You could have a primary database in one cloud and read replicas in another, optimizing for both performance and cost.
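Picking up item d) above, here is a minimal sketch using pgBackRest (the stanza name is a placeholder; in practice these commands would run from cron or a Kubernetes CronJob):

```bash
# Weekly full backup (e.g. Sunday night)
pgbackrest --stanza=main --type=full backup

# Daily incremental backups the rest of the week; only blocks
# changed since the last backup are transferred
pgbackrest --stanza=main --type=incr backup

# Inspect what exists in the backup repository
pgbackrest --stanza=main info
```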

By leveraging these strategies in a self-managed Kubernetes environment, organizations can significantly optimize their resource usage and reduce data transfer costs, leading to substantial savings compared to typical managed database services.

Breaking down the Numbers: a Cost Comparison between PostgreSQL on RDS vs EKS

Let’s get down to brass tacks. How do the costs really stack up? We’ve crunched the numbers for a small Postgres database, comparing the managed RDS service with hosting on Kubernetes. For Kubernetes, we use EC2 instances with local NVMe disks, managed via EKS, with simplyblock as the storage orchestration layer.

Scenario: 3TB Postgres Database with High Availability (3 nodes) and Single AZ Deployment

Managed Service (AWS RDS) using three db.m4.2xlarge On-Demand with gp3 Volumes

Available resources:

  • Available vCPU: 8
  • Available Memory: 32 GiB
  • Available Storage: 3TB
  • Available IOPS: 20,000 per volume
  • Storage latency: 1-2 milliseconds

Costs:

  • Monthly Total Cost: $2,511.18
  • 3-Year Total: $2,511.18 × 36 months = $90,402

Editorial: See the pricing calculator for Amazon RDS for PostgreSQL

Self-Managed on Kubernetes (EKS) using three i3en.xlarge Instances On-Demand

Available resources:

  • Available vCPU: 12
  • Available Memory: 96 GiB
  • Available Storage: 3.75TB (7.5TB raw storage with an assumed 50% data protection overhead for simplyblock)
  • Available IOPS: 200,000 per volume (10x more than with RDS)
  • Storage latency: below 200 microseconds (local NVMe disks orchestrated by simplyblock)

Costs:

  • Monthly instance cost: $989.88
  • Monthly storage orchestration cost (e.g. simplyblock): $90 (3TB × $30/TB)
  • Monthly EKS cost: $219 ($73 per cluster × 3)
  • Monthly Total Cost: $1,298.88

3-Year Total: $1,298.88 × 36 months = $46,759

Base Savings: $90,402 − $46,759 = $43,643 (48% over 3 years)

That’s a whopping 48% saving over three years! But wait, there’s more to consider. We have made some simplistic assumptions to estimate additional benefits of self-hosting to showcase the real potential of savings. While the actual efficiencies may vary from company to company, it should at least give a good understanding of where the hidden benefits might lie.

Additional Benefits of Self-Hosting (Estimated Annual Savings)

  1. Resource optimization/sharing : Assumption: 20% better resource utilization (assuming existing Kubernetes clusters). Estimated annual saving: 20% × $989.88 × 12 = $2,375
  2. Reduced Data Transfer Costs : Assumption: 50% reduction in data transfer fees. Estimated annual saving: $2,000
  3. Flexible Scaling : Avoid over-provisioning during non-peak times. Estimated annual saving: $3,000
  4. Multi-Cloud Strategy : Ability to negotiate better rates across providers. Estimated annual saving: $5,000
  5. Open-Source Tools : Reduced licensing costs for management tools. Estimated annual saving: $4,000

Disaster Recovery Insights

  • RTO (Recovery Time Objective) Improvement : Self-managed: potential for 40% faster recovery. Estimated value: $10,000 per hour of downtime prevented
  • RPO (Recovery Point Objective) Enhancement : Self-managed: achieve near-zero data loss. Estimated annual value: $20,000 in potential data loss prevention

Total Estimated Annual Benefit of Self-Hosting

Self-hosting pays off. Here is the summary of benefits:

  • Base savings: $43,643 over 3 years, or roughly $14,548/year
  • Additional benefits: $16,375/year (the sum of the five estimates above)
  • Disaster recovery improvement: $30,000/year (conservative estimate)

Total Estimated Annual Benefit: $60,923

Total Estimated Benefits over 3 Years: $182,769

Note: These figures are estimates and can vary based on specific use cases, implementation efficiency, and negotiated rates with cloud providers.

Beyond the Dollar Signs: the Real Value Proposition

Money talks, but it’s not the only factor in play. Let’s look at the broader picture.

Performance and Scalability

With self-managed Kubernetes databases, you’re in the driver’s seat. Need to scale up for a traffic spike? Done. Want to optimize for a specific workload? You’ve got the power.

Security and Compliance

Think managed services have the upper hand in security? Think again. With self-managed solutions, you have granular control over your security measures. Plus, you’re not sharing infrastructure with unknown entities.

Innovation and Agility

In the fast-paced tech world, agility is king. Self-managed solutions on Kubernetes allow you to adopt cutting-edge technologies and practices without waiting for your provider to catch up.

Is the Database on Kubernetes for Everyone?

Definitely not. While self-managed databases on Kubernetes offer significant benefits in terms of cost savings, flexibility, and control, they’re not a one-size-fits-all solution. Here’s why:

  • Expertise: Managing databases on Kubernetes demands a high level of expertise in both database administration and Kubernetes orchestration. Not all organizations have this skill set readily available. Self-management means taking on responsibilities like security patching, performance tuning, and disaster recovery planning. For smaller teams or those with limited DevOps resources, this can be overwhelming.
  • Scale of operations : For simple applications with predictable, low-to-moderate database requirements, the advanced features and flexibility of Kubernetes might be overkill. Managed services could be more cost-effective in these scenarios. The same applies to very small operations or startups in early stages: the cost benefits of self-managed databases on Kubernetes might not outweigh the added complexity and resource requirements.

While database management on Kubernetes offers compelling advantages, organizations must carefully assess their specific needs, resources, and constraints before making the switch. For many, especially larger enterprises or those with complex, dynamic database requirements, the benefits can be substantial. However, others might find that managed services better suit their current needs and capabilities.

Bonus: Simplyblock

There is one more bonus benefit that you get when running your databases in Kubernetes – you can add simplyblock as your storage orchestration layer behind a single CSI driver that automatically and intelligently serves the storage service of your choice. Do you need a fast NVMe cache for some hot transactional data with random IO, but don’t want to keep it hot forever? We’ve got you covered!

Simplyblock is an innovative cloud-native storage product, which runs on AWS, as well as other major cloud platforms. Simplyblock virtualizes, optimizes, and orchestrates existing cloud storage services (such as Amazon EBS or Amazon S3) behind an NVMe storage interface and a Kubernetes CSI driver. As such, it provides storage for compute instances (VMs) and containers. We have optimized for IO-heavy database workloads, including OLTP relational databases, graph databases, non-relational document databases, analytical databases, fast key-value stores, vector databases, and similar solutions.

This optimization has been built from the ground up to orchestrate a wide range of database storage needs, such as reliable and fast (high write-IOPS) storage for write-ahead logs and support for ultra-low latency, as well as high IOPS for random read operations. Simplyblock is highly configurable to optimally serve the different database query engines.

Some of the key benefits of using simplyblock alongside your stateful Kubernetes workloads are:

  • Cost Reduction, Margin Increase: Thin provisioning, compression, deduplication of hot-standby nodes, and storage virtualization with multiple tenants increases storage usage while enabling gradual storage increase.
  • Easy Scalability of Storage: Single node databases require highly scalable storage (IOPS, throughput, capacity) since data cannot be distributed to scale. Simplyblock pools either Amazon EBS volumes or local instance storage from EC2 virtual machines and provides a scalable and cost effective storage solution for single node databases.
  • Enables Database Branching Features: Using instant snapshots and clones, databases can be quickly branched out and provided to customers. Due to copy-on-write, the storage usage doesn’t increase unless the data is changed on either the primary or branch. Customers could be charged for “additional storage” though.
  • Enhances Security: Using an S3-based streaming of a recovery journal, the database can be quickly recovered from full AZ and even region outages. It also provides protection against typical ransomware attacks where data gets encrypted by enabling Point-in-Time-Recovery down to a few hundred milliseconds granularity.

Conclusion: the True Cost Revealed

When it comes to database management, the true cost goes far beyond the monthly bill. By choosing a self-managed Kubernetes solution, you’re not just saving money – you’re investing in flexibility, performance, and future-readiness. The savings and benefits will always be use-case and company-specific, but the general conclusion remains unchanged. While operating databases in Kubernetes is not for everyone, for those who have the privilege of such a choice, it should be a no-brainer kind of decision.

Is managing databases on Kubernetes complex?

While there is a learning curve, modern tools and platforms like simplyblock significantly simplify the process, often making it more straightforward than dealing with the limitations of managed services. The knowledge acquired in the process can, though, be reused across deployments in different clouds.

How can I ensure high availability with self-managed databases?

Kubernetes offers robust features for high availability, including automatic failover and load balancing. With proper configuration, you can achieve even higher availability than many managed services offer, meeting any possible SLA out there. You are in full control of the SLAs.

How difficult is it to migrate from a managed database service to Kubernetes?

While migration requires careful planning, tools and services exist to streamline the process. Many companies find that the long-term benefits far outweigh the short-term effort of migration.

How does simplyblock handle database backups and point-in-time recovery in Kubernetes?

Simplyblock provides automated, space-efficient backup solutions that integrate seamlessly with Kubernetes. Our point-in-time recovery feature allows you to restore your database to any specific moment, offering protection against data loss and ransomware attacks.

Does simplyblock offer support for multiple database types?

Yes, simplyblock supports a wide range of database types including relational databases like PostgreSQL and MySQL, as well as NoSQL databases like MongoDB and Cassandra. Check out our “Supported Technologies” page for a full list of supported databases and their specific features.

How to choose your Kubernetes Postgres Operator?

A Postgres Operator for Kubernetes eases the management of PostgreSQL clusters inside the Kubernetes (k8s) environment. It watches changes (additions, updates, deletes) of PostgreSQL related CRD (custom resource definitions) and applies changes to the running clusters accordingly.

Therefore, a Postgres Operator is a critical component of the database setup. In addition to the “simply” management of the Postgres cluster, it often also provides additional integration with external tools like pgpool II or pgbouncer (for connection pooling), pgbackrest or Barman (for backup and restore), as well as Postgres extension management.

Critical components should be chosen wisely. Oftentimes, it’s hard to exchange one tool for another later on. Additionally, depending on your requirements, you may need a tool with commercial support available, while others prefer the vast open source ecosystem. If both, even better.

But how do you choose your Postgres Operator of choice? Let’s start with a quick introduction of the most common options.

Stolon

Stolon is one of the oldest operators available. Originally released in November 2015, it even predates the term Kubernetes Operator.

The project is well known and has over 4.5k stars on GitHub. However, its age shows. Many features aren’t cloud-native, and it doesn’t support CRDs (custom resource definitions), so configuration isn’t done the Kubernetes way. All changes are handled through a command line tool. Apart from that, the last release, 0.17.0, is from September 2021, and there isn’t really any new activity on the repository. You can read that as “extremely stable” or “almost abandoned”. I guess both views on the matter are correct. Anyhow, from my point of view, the lack of activity is concerning for a tool that’s potentially being used for 5-10 years, especially in a fast-paced world like the cloud-native one.

Personally, I don’t recommend using it for new setups. I still wanted to mention it though since it deserves it. It did and does a great job for many people. Still, I left it out of the comparison table further below.

CloudNativePG

CloudNativePG is the new kid on the block, or so you might think. Its official first commit happened in March 2022. However, CloudNativePG was originally developed and sponsored by EDB (EnterpriseDB), one of the oldest companies built around PostgreSQL.

As the name suggests, CloudNativePG is designed to bring the full cloud-native feeling across. Everything is defined and configured using CRDs. No matter if you need to create a new Postgres cluster, want to define backup schedules, configure automatic failover, or scale up and down. It’s all there. Fully integrated into your typical Kubernetes way of doing things. Including a plugin for kubectl.
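To give a feel for the CRD-first approach, a minimal three-instance cluster can be declared like this (name and storage size are arbitrary):

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-demo
spec:
  instances: 3     # one primary plus two streaming replicas
  storage:
    size: 20Gi
EOF

# Inspect the cluster via the kubectl plugin mentioned above
kubectl cnpg status pg-demo
```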

On GitHub, CloudNativePG skyrocketed in terms of stargazers and collected over 3.6k stars over the course of its lifetime. And while it’s not officially a CNCF (Cloud Native Computing Foundation) project, the CNCF stands pretty strongly behind it. And not to forget, the feature set is on par with older projects. No need to hide here.

All in all, CloudNativePG is a strong choice for a new setup. Fully in line with your Kubernetes workflows, feature rich, and a strong and active community.

Editor’s note: Btw, we had Jimmy Angelakos as a guest on the Cloud Commute podcast, and he talks about CloudNativePG. You should listen in.

Crunchy Postgres Operator (PGO)

PGO, or the Postgres Operator from Crunchy Data, has been around since March 2017 and is a beloved choice for a good chunk of people. On GitHub, PGO has over 3.7k stars, an active community, quick response times to issues, and an overall pretty stable activity timeline.

Postgres resources are defined and managed through CRDs, as are Postgres users. Likewise, PGO provides integration with a vast tooling ecosystem, such as patroni (for automatic failover), pgbackrest (for backup management), pgbouncer (connection pooling), and more.

A good number of common extensions are provided out of the box, however, adding additional extensions is a little bit more complicated.

Overall, Crunchy PGO is a solid, production-proven option, also into the future. While not as fresh and hip as CloudNativePG anymore, it checks all the necessary marks, and it does so for many people for many years.

OnGres StackGres

StackGres by OnGres is also fairly new and tries to do a few things differently. While all resources can be managed through CRDs, they can also be managed through a CLI, and even a web interface. Up to the point that you can create a resource through a CRD, change a small property through the CLI, and scale the Postgres cluster using the web UI. All management interfaces are completely interchangeable.

Same goes for extension management. StackGres has the biggest amount of supported Postgres extensions available, and it’s hard to find an extension which isn’t supported out of the box.

In terms of tooling integration, StackGres supports all the necessary tools to build a highly available, fault tolerant, automatically backed up and restored, scalable cluster.

In comparison to some of the other operators I really like the independent CRD types, giving a better overview of a specific resource instead of bundling all of the complexity of the Postgres and tooling ecosystem configuration in one major CRD with hundreds and thousands of lines of code.

StackGres is my personal favorite, even though it has only accumulated around 900 stars on GitHub so far.

While still young, the team behind it is a group of veteran PostgreSQL folks. They just know what they do. If you prefer a bit of a bigger community, and less of a company driven project, you’ll be better off with CloudNativePG, but apart from that, StackGres is the way to go.

Editor’s note: We had Álvaro Hernández, the founder and CEO of OnGres, in our Cloud Commute podcast, talking about StackGres and why you should use it. Don’t miss the first-hand information.

AppsCode KubeDB

KubeDB by AppsCode is different. In development since 2017, it’s well known in the community, but an all-in commercial product. That said, commercial support is great and loved.

Additionally, KubeDB isn’t just PostgreSQL. It supports Elasticsearch, MongoDB, Redis, Memcache, MySQL, and more. It’s a real database-in-a-box deployment solution.

All boxes are checked for KubeDB and there isn’t much to say about it other than, if you need a commercially supported operator for Postgres, look no further than KubeDB.

Zalando Postgres Operator

Last but not least, the Postgres Operator from Zalando. Yes, that Zalando, the one that sells shoes and clothes.

Zalando is a big Postgres user and started early with the cloud-native journey. Their operator has been around since 2017 and has a sizable fanbase. On GitHub the operator managed to collect over 4k stars, has a very active community, a stable release cadence, and is a great choice.

In terms of integrations with the tooling ecosystem, it provides less flexibility and is slightly opinionated towards how things are done. It was developed for Zalando’s own infrastructure first and foremost though.

Anyhow, the Zalando operator has been and still is a great choice. I actually used it myself for my previous startup and it just worked.

Which Postgres Kubernetes Operator should I Use?

You already know the answer: it depends. I know we all hate that answer, but it is true. As hinted at in the beginning, if you need commercial support, certain options are already out of scope.

It also depends if you already have another Postgres cluster with an operator running. If it works, is there really a reason to change it or introduce another one for the new cluster?

Anyhow, below is a quick comparison table of features and supported versions that I think are important.

                   | CloudNativePG | Crunchy Postgres for Kubernetes | OnGres StackGres | KubeDB | Zalando Postgres Operator
Tool version       | 1.23.1 | 5.5.2 | 1.10.0 | v2024.4.27 | 1.12.0
Release date       | 2024-04-30 | 2024-05-23 | 2024-04-29 | 2024-04-30 | 2024-05-31
License            | Apache 2 | Apache 2 | AGPL3 | Commercial | MIT
Commercial support | Yes (EDB) | Yes (Crunchy Data) | Yes (OnGres) | Yes (AppsCode) | No

Supported PostgreSQL Features

                      | CloudNativePG | Crunchy Postgres for Kubernetes | OnGres StackGres | KubeDB | Zalando Postgres Operator
Supported versions    | 12, 13, 14, 15, 16 | 11, 12, 13, 14, 15, 16 | 12, 13, 14, 15, 16 | 9.6, 10, 11, 12, 13, 14 | 11, 12, 13, 14, 15, 16
Postgres Clusters     | Yes | Yes | Yes | Yes | Yes
Streaming replication | Yes | Yes | Yes | Yes | Yes
Supports Extensions   | Yes | Yes | Yes | Yes | Yes

High Availability and Backup Features

All five operators are compared on the following capabilities: hot standby, warm standby, automatic failover, continuous archiving, restore from WAL archive, point-in-time recovery (PITR), manual backups, and scheduled backups.

Kubernetes Specific Features

The operators are further compared on: backups via Kubernetes, custom resources, use of default PostgreSQL images, CLI access, a web UI, tolerations, and node affinity.

How to choose your Postgres Operator?

The graph below shows the GitHub stars of the above Postgres Operators. What we see is a clear domination of Stolon, PGO, and Zalando’s PG Operator, with CloudNativePG rushing in from behind. StackGres, while around longer than CloudNativePG, doesn’t have the same community backing yet. But GitHub stars aren’t everything.

All of the above tools are great options, with the exception of Stolon, which isn’t a bad tool, but I’m concerned about the lack of activity. Make of it what you like.

GitHub stars history for CloudNativePG, Crunchy PGO, StackGres, the Zalando Postgres Operator, KubeDB, and Stolon

Before closing, I want to quickly give some honorable mentions to two further tools.

Percona has an operator for PostgreSQL, but the community is very small right now. Let’s see if they manage to bring it on par with the other tools. If you use other Percona tools, it’s certainly worth giving it a look: Percona Operator for PostgreSQL.

The other one is the External PostgreSQL Server Operator by MoveToKube. It didn’t really fit the topic of this blog post as it’s less of a Postgres Operator and more of a database (in Postgres’ relational entity sense) and user management tool. Meaning, it uses CRDs to add, update, and remove databases in external PG servers, as it does for Postgres users. Anyhow, this tool also works with services like Timescale Cloud, Amazon RDS, and many more. Worth mentioning, and maybe you can make use of it in the future.

Block Storage Volume Pooling for the Cloud-Age

If you have services running in AWS, you’ll eventually need block storage to store data. Services like Amazon EBS (Elastic Block Store) provide block storage to be used with your EC2 instances, Amazon EKS (Elastic Kubernetes Service), and others. While they provide an easy-to-use and fast option, there are several limitations you’ll eventually run into.

Amazon EBS: the Limitations

When building out a new system, quick iterations are generally key. That includes fast turnovers to test ideas or validate approaches. Using out-of-the-box services like Amazon EBS helps with these requirements. Cloud providers like AWS offer these services to get customers started quickly.

That said, cloud block storage volumes, such as Amazon EBS (gp3, io2, io2 Block Express), provide fast storage with high IOPS and low latency for your compute instances (Amazon EC2) or Kubernetes environments (Amazon EKS or self-hosted) in the same availability zone.

While quick to get started, eventually you may run into the following shortcomings, which will make scaling either complicated or expensive.

Limited Free IOPS: The number of free IOPS is limited on a per volume basis, meaning that if you need high IOPS numbers, you have to pay extra. Sometimes you even have to change the volume type (gp3 only supports up to 16k IOPS, whereas io2 Block Express supports up to 256k IOPS).

Limited Durability: Depending on the selected volume type you’ll have to deal with limited data durability and availability (e.g. gp3 offers only 99.9% availability). There is no way to buy your way out of it, except using two volumes in some RAID1-like configuration.

No Thin Provisioning: Storage cost is paid per provisioned capacity per time unit and not by actual usage (which is often around 50% lower). When a volume is created, the given capacity is used for calculating the price, meaning if you create a 1TB volume but only use 100GB, you’ll still pay for the 1TB.

Limited Capacity Scalability: While volumes can be grown in size, you can only increase the capacity once every 6h (at least on Amazon EBS). Therefore, you need to estimate upfront how large the volume will grow in that time frame. If you miscalculate, you’ll run out of space.

No Replication Across Availability Zones: Volumes cannot be replicated across availability zones, limiting the high availability if there are issues with one of the availability zones. This can be mitigated using additional tools, but they invoke additional cost.

Missing Multi-Attach: Attaching a volume to multiple compute instances or containers offers shared access to a dataset. Depending on volume-type, there is no option to multi-attach a volume to multiple instances.

Limited Latency: Depending on volume type, the access latency of a volume is located in the single or double-digit milliseconds range. Latency may also fluctuate. With low and predictable latency requirements, you may be limited here.

Simplyblock, Elastic Block Storage for the Cloud-Age

Simplyblock is built from the ground up to break free of the typical cloud limitations and provide sub-millisecond predictable latency and virtually unlimited IOPS, while empowering you with scalability and a minimum of five nines (99.999%) availability.

Simple simplyblock setup with one storage node being connected to from three Kubernetes worker nodes via the NVMe over TCP protocol, offering virtual NVMe storage volumes

To overcome your typical cloud block storage service limitations, simplyblock implements a transparent and dynamic pooling of cloud-based block storage volumes, combining the individual drives into one large storage pool.

In its simplest form, block storage is pooled in a single, but separated storage node (typically a virtual machine). From this pooled storage, which can be considered a single, large virtual disk (or stripe), we carve out logical volumes. These logical volumes can differ in capacity and their particular performance characteristics (IOPS, throughput). All logical volumes are thin-provisioned, thus making more efficient use of raw disk space available to you.

On the client side, meaning Linux and Windows, no additional drivers are required. Simplyblock’s logical volumes are exported as NVMe devices using the NVMe over Fabrics industry standard for fast SSD storage; the NVMe over TCP initiator is already part of the operating system kernels of Linux and the latest versions of Windows Server. Simplyblock logical volumes are designed for ease of use and an out-of-the-box experience: simply partition them and format them with any operating-system-specific file system.
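For illustration, attaching such a volume on Linux only requires the in-kernel NVMe/TCP initiator and nvme-cli; the transport address and NQN below are placeholders, not real simplyblock values:

```bash
# Load the NVMe over TCP initiator (part of the mainline kernel)
sudo modprobe nvme_tcp

# Connect to the target; address, port, and NQN are made up
sudo nvme connect -t tcp -a 10.0.0.10 -s 4420 \
  -n nqn.2023-01.example:lvol-demo

# The logical volume now appears as a regular NVMe device
sudo nvme list
sudo mkfs.ext4 /dev/nvme1n1
sudo mount /dev/nvme1n1 /mnt/data
```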

Additional data services include instant snapshotting of volumes (for fast backup) and instant cloning (for speed and storage efficiency) as well as storage replication across availability zones (for disaster recovery purposes). All of this is powered by the copy-on-write nature of the simplyblock storage engine.

Last but not least, logical volumes can also be multi-attached to multiple compute instances.

More importantly though, simplyblock has its own Kubernetes CSI driver for automated container storage lifecycle management under the Kubernetes ecosystem.
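From the Kubernetes side this looks like any other CSI-backed storage. The provisioner name and class name below are illustrative assumptions, not the driver's actual values:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: simplyblock-nvme         # hypothetical class name
provisioner: csi.simplyblock.io  # hypothetical driver name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: simplyblock-nvme
  resources:
    requests:
      storage: 100Gi             # thin-provisioned on the pooled storage
EOF
```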

Scaling out the Simplyblock Storage Pool

If the processing power of a single storage node isn’t sufficient anymore, or high-availability is required, you will use a cluster. When operating as a cluster, multiple storage nodes are combined as a single virtual storage pool and compute instances are connected to all of them.

Simplyblock cluster being connected to Kubernetes workers through multi-pathing

In this scenario, a transparent online fail-over mechanism takes care of switching the connection of logical volumes from one node to another in case of connection issues. This mechanism (NVMe multi-pathing with ANA) is already built into the operating system kernels of Linux and Windows Server, therefore, no additional software is required on the clients.
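On a client, the paths and their ANA states can be inspected with nvme-cli; the output shape varies by version:

```bash
# Lists each NVMe subsystem with its paths and ANA state
# (e.g. "optimized" vs. "inaccessible" during a fail-over)
sudo nvme list-subsys
```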

It is important to note that clusters can be expanded by either increasing the attached block storage pools (on the storage nodes) or by adding additional storage nodes to the cluster. This expansion can happen online, doesn’t require any downtime, and eventually results in an automatically re-balanced storage cluster (background operation).

Simplyblock and Microsecond Latency

When double-digit microsecond latency is required, simplyblock can utilize client-local NVMe disks as caches.

Simplyblock cluster with client-local caches

In this case simplyblock can boost read IOPS and decrease read access latency to below 100 microseconds. To achieve this, you configure the local NVMe devices as a write-through (read) cache. Simplyblock client-local caches are deployed as containers. In a typical Kubernetes environment, this is done as part of the CSI driver deployment via the helm chart. Caches are transparent to the compute instances and containers and look like any access to local NVMe storage. They are, however, managed as part of the simplyblock cluster.

Simplyblock as Hyper-converged instead of Disaggregated

Simplyblock running in hyper-converged mode, alongside other services

If you have sufficient spare capacity (CPU, RAM resources) on your compute instances and don’t want to deploy additional, separated storage nodes, a hyper-converged setup can be chosen as well. A hyper-converged setup is more cost-efficient as no additional virtual server is required.

On the other hand, resources are shared between the services consuming storage and the simplyblock storage engine. While this isn’t necessarily a problem, it requires some additional capacity planning on your end. Simplyblock generally recommends a disaggregated setup where storage and compute resources are strictly separated.

Simplyblock: Scalable Elastic Block Storage Made Easy

No matter what deployment strategy you choose, storage nodes are connected to simplyblock’s hosted and managed control plane. The control plane is highly scalable and can serve thousands of storage clusters. That said, if you deploy multiple clusters, the management infrastructure is shared between all of them.

Likewise, all of the shown deployment options are realized using different deployment configurations of the same core components. Additionally, none of the configurations requires special software on the client side.

Anyhow, simplyblock enables you to build your own elastic block storage, overcoming the shortcomings of typical cloud provider offered services, such as Amazon EBS. Simplyblock provides advantages such as:

  • Overcome the (free) IOPS limit, with single volumes benefitting from 100k or more free IOPS (compared to e.g. 3,000 for AWS gp3)
  • Reach read access latency lower than 100 microseconds
  • Multi-attach a volume to many instances
  • Replicate a volume across availability zones (synchronous and asynchronous replication)
  • Bring down the cost of capacity and increase storage efficiency by multiple times via thin provisioning and copy-on-write clones
  • Scale the cluster according to your needs with zero-downtime cluster extension

If you want to learn more about simplyblock, see why simplyblock. Or do you want to get started right away?

Building a Time Series Database in the Cloud with Steven Sklar from QuestDB (video + interview)

This interview is part of the simplyblock Cloud Commute Podcast, available on Youtube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site.

In this installment of the podcast, we talked to Steven Sklar (his private blog, X/Twitter) from QuestDB, a company producing a time series database for large IoT, metrics, observability, and other time-component data sets. He talks about how they implemented their database offering, from building their own operator to how storage is handled.

Chris Engelbert: Hello, everyone. Welcome back to another episode of simplyblock’s Cloud Commute Podcast. Today, I’m joined by Steven Sklar from QuestDB. He was recommended by a really good friend and an old coworker who’s also at QuestDB. So hello, Steven, and good to have you.

Steven Sklar: Thank you. It’s really a pleasure to be here, and I’m looking forward to our chat.

Chris Engelbert: All right, cool. So maybe just start with a quick introduction. I mean, we already know your name, and I hope I pronounced that correctly. But what else is to talk about you?

Steven Sklar: Sure. So I kind of have a nontraditional background. I started with a degree in economics and finance and worked on Wall Street for a little bit. I like to say in most of my conference talks on the first slide that my first programming language was actually Excel VBA, which I do still have a soft spot for. And I found myself on a bond trading desk and kind of reached the boundaries of Excel and started working in C# and SQL Server, realized I liked that a lot more than just kind of talking to people on the phone and negotiating over various mortgage bonds and things. So I moved into the IT realm and software development and have been developing software ever since. So I moved on from C# into the Python world, moved on from finance into the startup world, and I currently am QuestDB, as you mentioned earlier.

Chris Engelbert: Right. So maybe you can say a few words about QuestDB. What is it? What does it do? And why do people want to use it?

Steven Sklar: Sure. QuestDB is a time series database with a focus on high performance. And I like to think of ease of usability. So we can ingest up to like millions of rows per second on some benchmarks, which is just completely mind-blowing to me. It’s actually written primarily in Java, which doesn’t necessarily go hand in hand with high performance, but we’ve rewritten most of the standard library to avoid memory allocation. So I know it actually truly is high performance. We’ve actually been introducing Rust as well into the code base. You can query the database using plain old SQL. And it really fits into several use cases, like financial tick by tick data and sensor data. I have one going on in my house right now, collecting all of my smart home stuff from Home Assistant. And I mean, yes, I’ve been here for around a year and a half, I want to say. And it’s been a great ride.

Chris Engelbert: Right. So you mentioned time series. And I’m aware what time series are because I’ve been at a competitor before that. So Jaromir and I went slightly different directions, but we both ended up in the time series world. But for the audience, that may not be perfectly aware what time series are. You already mentioned tick data from the financial background. You also mentioned Home Assistant and IoT data, which is great because I’m doing the same thing for me. It’s most like energy consumption and stuff. But maybe you have some more examples.

Steven Sklar: Sure. Kind of a canonical one is monitoring and metrics. So any kind of data, I think, has a time component. Because it’s and I think you need a specialized database. A lot of people ask, well, why not just use Postgres or any of the common databases? And you could, but you’re probably not going to scale. And you’re going to hit a point where your queries are just not performing. And time series databases, in many cases, ours in particular as well, I can speak to, is a columnar database. So it stores data in a different format than you normally would see in a traditional database. And that makes querying and actually ingesting data from a wide range of sources much more efficient. And you kind of like to think of it as, I don’t want to put myself on the spot and do mental math. But imagine if you have 10,000 devices that are sending information to your database for more than a second. It’s not that big of a deal. But maybe, let’s say, you scale and you end up with a million devices. All of a sudden, you’re dealing with tremendous amounts of data going into your database that you need to manage. And that’s a different problem, I think, than your typical relational database.

Chris Engelbert: Right. And I think you brought up a good example. Most of the time when we talk about devices as I said, I’m coming from a kind of similar background. It’s not like a device just sends you a single data point. When we talk about connected cars, they actually send thousands to 100,000 of data, position information, all kinds of metrics about the car itself, the electronics, and all that kind of stuff. And that comes down to quite a massive amount of data. So yeah, I agree with you. An actual time series database is super important. You mentioned columnar storage. Maybe you can say a few words about how that is different from, I guess, your Excel sheet.

Steven Sklar: Sure. Well, I guess I don’t know if I can necessarily compare it to my Excel spreadsheet, since that’s its own weird XML format, of course. But columnar data, I guess, is different from, let’s say, tabular data in your typical database. Tabular data is generally stored in the table format, where all of your columns and rows are kind of stored together versus columnar in a data store, each column is its own separate file. And that kind of makes it more efficient when you’re working in a time component, because time is generally your index. You’re not really indexing on a lot of things like primary key type things. You’re really just mostly indexing on time, like what happened at this point in time or over this time period. Because of that, we’re able to optimize the storage model to allow faster querying and also ingestion as well. And just for clarity, I’m not a core developer. I’m more of a cloud guy, so I hope I got those details right.

Chris Engelbert: I think you get the gist of it. But for QuestDB, that means it still looks like a tabular kind of database. So you still have your typical tables, but the individual columns are stored separately. Is that correct?

Steven Sklar: Correct.

Chris Engelbert: Ok, cool. So you said you’re a cloud guy. But as far as I know, you can install QuestDB locally, on-prem. You can install it into your own private cloud. I think there is the QuestDB cloud, which is the hosted platform. Well, not I guess. I know that it is. So maybe what is special about that? Does it have special features? Or is that mostly about the convenience of getting the managed database and getting rid of all the work you have to do when you run your own database, which can be complicated.

Steven Sklar: Absolutely. So actually, both. Obviously, you don’t have to manage it, and that’s great. You can leave it to the experts. That’s already worth the price of admission, I think. Additionally, we have the enterprise, the QuestDB enterprise, which has additional features. And all of those features, like role-based authentication and replication that’s coming soon and compression of your data on disk, are all things that you get automatically through the cloud.

Chris Engelbert: Ok, so that means I have to buy QuestDB enterprise when I want to have those features on prem, but I get them on the cloud right away.

Steven Sklar: Correct.

Chris Engelbert: Ok, cool. And correct me if I’m wrong, but I think from a client perspective, it uses the Postgres protocol. So any Postgres client is a QuestDB client, basically.

Steven Sklar: Absolutely, 100%.

Chris Engelbert: All right, so that means as an application developer, it’s super simple. I’ll basically drop in QuestDB instead of Postgres or anything else. So yeah, let’s talk a little bit about the cloud then. Maybe you can elaborate a little bit on the stack you’re running on. I’m not sure how much you can actually say, but anything you can share will probably be beneficial to everyone.

Steven Sklar: Oh, yeah, no problem. So we run on AWS. We run on Kubernetes. And we also– I guess one thing that I’m particularly proud of is an operator that I wrote to orchestrate all these databases. So our model, which is not necessarily your bread and butter Kubernetes deployment, is actually a single-tenant model. So we have one database per instance. And when you’re running Kubernetes, you kind of think, why do you care about what nodes you’re running on? Shouldn’t all that be abstracted away? And I would agree. We primarily use Kubernetes for its orchestration. But we want to avoid the noisy neighbor problem. We want to make it easy for users to change instances and instance types quickly. We want users to be able to shut down their database. And we still have the volume. So all these things, we could orchestrate them directly through Kubernetes. But we decided to use single-tenant nodes for that.

Chris Engelbert: Right. So let me see. So that means you’re using Kubernetes, as you said, mostly for orchestration, which means it’s more like if the database for some reason goes down or you have to have maintenance or you want to upgrade. It’s more about the convenience of having something managing that instead of doing it manually, right?

Steven Sklar: Exactly. And so I think we really thought, ok (and this is a little bit before my time), you could always roll your own cluster. But there are so many things baked into Kubernetes these days, like monitoring and logs and metrics and networking and DNS, all these things that I don't necessarily want to spend all my time on. I want to build a product. And by using Kubernetes and leveraging those components, we were able to build the cloud incredibly quickly, get up and running, and now expand upon it in the future. And that's why, again, I mentioned the operator earlier. That was not originally part of the cloud. The cloud still has, in a more limited capacity, what we call a provisioner. So basically, if you're interacting with the cloud and you make a new database, you send a message to a queue, and that message will be picked up by a provisioner. And previously, that provisioner would say, ok, you want a database. Let's make a StatefulSet. Let's make a persistent volume. Let's make these networking policies. Let's do all of these things. If there's an error, we can roll back. And we have retries. So it's fairly sophisticated. But we ended up moving towards this operator model, where instead of the provisioner managing each of these individual components, it just manages one QuestDB resource. And our operator now handles all of those other little things. I think that's much more flexible for us in terms of, A, simplifying the provisioner code, and also adding new features: instead of having to work in this ever-growing web of Python, now it's really just adding a snippet here and there to our reconciliation loop.
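
To make the provisioner-versus-operator distinction concrete: instead of creating a StatefulSet, persistent volume, and network policies one by one, the provisioner now creates a single custom resource and the operator reconciles the rest. QuestDB's actual CRD is not shown in the conversation, so the resource below is purely hypothetical, with invented group and field names:

```yaml
# Hypothetical custom resource; the API group, kind, and fields are
# invented for illustration and do not reflect QuestDB's real operator API.
apiVersion: questdb.example.com/v1
kind: QuestDB
metadata:
  name: customer-a
spec:
  instanceType: m5.2xlarge   # single-tenant node size
  volumeSize: 500Gi          # the volume outlives the database pod
  paused: false              # set true to stop the pod but keep the volume
```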

Chris Engelbert: Right. You mentioned that the database is mostly written in Java. Most operators are written in Go. So what about your operator? Is it Java?

Steven Sklar: It’s Go.

Chris Engelbert: That’s fair. To be honest, I think the vast majority is. So you mentioned AWS. But I think that you are mostly talking about QuestDB Cloud, right? I think from a user’s perspective, do I use a helm chart or do I also use the operator to install it myself?

Steven Sklar: Yes. So the operator is actually limited to the cloud only, because it's built specifically to manage our own infrastructure with our own assumptions. We do have a helm chart and an open source image on Docker Hub. I've used those more times than I can count.

Chris Engelbert: Ok, fair enough. So you basically support all cloud environments, all on-premise. But when you go for QuestDB Cloud, that is AWS, which I think is a fair decision. It is the biggest environment by far. So from a storage engine perspective, how much can you share? Can you talk about some fancy details? Like what kind of storage do you use? Do you use the local NVMe storage attached to the virtual machine or EBS volumes?

Steven Sklar: Yeah. So in our cloud, we actually have both NVMe and EBS. Most customers end up using EBS. And honestly, EBS is simpler to provision. But I do want to talk about some cool stuff that we've done with compression, because we never implemented our own compression algorithm. We're running on top of ZFS and using its compression. There's an issue about potential data corruption when using mmap on ZFS, or rather a combination of mmap and traditional sys calls, the pwrites and preads. So what we do is identify when we're running on ZFS and then decide to only use mmap calls to avoid this issue. And I think what we've done on the storage side of orchestrating this whole thing is pretty cool too. ZFS has its own notion of snapshots, its own notion of replication, its own notion of ZPools. And to simplify things, again, because we're running this, I don't necessarily want to say antiquated, but single-tenant model, which might not be in vogue these days, what we actually do is create one ZPool per volume and put our QuestDB on that ZPool, enabling compression. And we've written our own CSI storage driver that sits between Kubernetes and the cloud providers, so that we're able to pass calls on to the cloud provider if, let's say, we need to create or delete a volume using the cloud provider API. But when it comes to mounting ZFS and running ZFS-related commands, we take control of that and perform it in our own driver. I don't know when this is going to be released, but I'm actually talking about this in Atlanta next week.

Chris Engelbert: No, next week is a little bit early. Currently, I'm doing a couple of recordings, building a little bit of a pipeline, because of conferences; it's the same thing for me, I will be in Paris for KubeCon next week. I don't know the exact release date, I think it's in three or four weeks, so it's a little bit out. But I guess your talk may be recorded and public by then. If that is the case, I'm happy if you drop it over and I'll put it into the show notes; people will love that. So you said when you detect that you run on ZFS, you use mmap. You basically map the file into memory, you change the memory positions directly, and then you fsync it? Or how does it work? How do I have to think about that?

Steven Sklar: Oh, boy. Ok. This is getting a little out of my– So you always use mmap regardless. But the issue is when you combine mmap with traditional sys calls on ZFS. And so what we do is we basically turn off those other sys calls and only use mmap when we're writing to our files. In terms of the specifics of when we sync and things like that, I wish I could answer that right off the bat.

Chris Engelbert: That’s totally fine. So just to sneak in a little shameless plug here, we should totally look into getting QuestDB running on simplyblock. I think that could be a really interesting thing. Because you mentioned ZFS, it’s basically ZFS on steroids. ZFS from my perspective, I mean, I’m running a ZFS file server in the basement. It saved me a couple of times with a broken hard disk. It’s just an incredible piece of technology. I agree with that. And it’s interesting because I’ve seen a lot of people running database on ZFS. And ZFS is all about reliability. It’s not necessarily about the highest performance. So it’s interesting you choose ZFS and you say, that’s perfect and works great for us. So because we’re almost running out of time, as I said earlier, 20 minutes is super short. When you look at cloud and databases and the world as a whole, whatever you want to talk about, what do you think is the next big trend or the current big trend? What is coming? What do you think would be really cool?

Steven Sklar: Yeah. So I guess I'm not going to talk about the existential crisis I'm having with Devin and the AI bots, because it's just a little depressing for me right now. But one thing that I've been seeing over the past few years that I find very interesting is this move away from cloud and back into your own data center. I think having control over your data is something that's incredibly important to basically everyone now. And the challenge, as a DevOps engineer, is to find a happy medium between all the wonderful cloud APIs that you can use and going into the server room and hooking things up yourself. There's probably a happy medium there somewhere, and I think that's an area that is going to start growing in the future. You see a lot of on-prem Kubernetes type things, Kubernetes on the edge maybe. And for me, it presents a lot of interesting challenges, because I spent most of my career in startups working on the cloud, understanding the fundamentals of not just the cloud APIs but also operating systems and hardware a little bit. So figuring out where to draw that line in terms of what knowledge is transferable to this new paradigm will be interesting. That's a trend that I've been focused on, at least over the past couple of months.

Chris Engelbert: It is interesting that you mention that, because it is kind of that. When the cloud became big, everyone wanted to move to the cloud because it was "cheaper", in air quotes. And the next step was serverless, because it is yet even cheaper, which we all know is not necessarily true. And I see kind of the same thing now: people realize that not every workload is a great fit for the cloud, and they slowly start moving back, or at least going back to, not necessarily cloud instances, but co-located servers or plain virtual machines, and taking those for the workloads that do not need to be super scalable or super elastic. Well, thank you very much. That was very delightful. It was a pleasure having you.

Steven Sklar: Thank you.

Chris Engelbert: Thank you for being here and for the audience. I hope to– well, not see you, but hear you next time, next week. Thank you very much.

Steven Sklar: Thank you. Take care.

The post Building a Time Series Database in the Cloud with Steven Sklar from QuestDB (video + interview) appeared first on simplyblock.

Kubernetes: Future or Fad? https://www.simplyblock.io/blog/kubernetes-future-or-fad/ Wed, 10 Apr 2024 12:13:27 +0000 https://www.simplyblock.io/?p=295

Kubernetes has pretty much become synonymous with container orchestration. All competition has either been assimilated (or rewritten to be based on Kubernetes, see OpenShift) or has basically disappeared into the void (sorry, Nomad). Does that mean that development will slow down now? Or is it the beginning of something even bigger? Maybe Kubernetes is on the verge of becoming a generic name, just like Kleenex, or maybe a verb like "to google".

Years ago, somebody asked me in an interview what I thought of Docker, and if I saw a future for containerization. At the time my answer was quick and easy. First of all, containerization wasn't a new concept. BSD, Solaris, and other systems had had containers for years. They were kind of new to Linux though (at least in a widespread fashion), so they were here to stay. They were the next logical evolutionary step after virtualization. Docker, however, at least in my mind, was different. About Docker I simply answered: "it's the best tool we have today, but I hope it won't be the final solution we can come up with." While Docker turned around and is just coming back, the tooling we use today is universally built upon the specs defined by the Open Container Initiative (OCI) and its OCI image format.

So what will the future hold for Kubernetes? Is it going to stay or is it going to step into the abyss and will it "just be another platform that was replaced by something else", as Michael Levan wrote in The Future of Kubernetes.

The Rise of Kubernetes

Jokingly, when looking up container orchestration in the dictionary, you’ll probably find the synonym Kubernetes, but it took Kubernetes about a decade to get to where it is today.

Initially built by Google on the key learnings of Borg, Google's internal container orchestration platform, K8s was released in September 2014. By the release of Kubernetes, Borg itself was already over a decade old. By 2013, many of the original team members of Borg had started to look into the next step. Project 7 was born.

At its release, Kubernetes was still using Docker underneath, a combination that most probably helped elevate Kubernetes' popularity. Docker was extremely popular at the time, but people started to find its shortcomings when trying to organize and run large numbers of containers. Kubernetes was about to fix that. With its concept of independently deployed building blocks and actors (or agents), it was easy to extend but still understandable. In addition, resources are declarative by nature (written as JSON or YAML files), which enables version control of those definitions.
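
As a minimal example of that declarative nature, a Deployment describes the desired state (here, three replicas of an image), and Kubernetes' controllers continuously reconcile reality towards it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3                 # desired state; the controller reconciles towards it
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: web
          image: nginx:1.25   # any OCI image
          ports:
            - containerPort: 80
```

Because this is just a text file, it can live in version control next to the application code.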

Ever since its release, K8s has enabled more and more use cases, and hence more companies started using it. I think a major step for adoption was the release of Helm in 2016, which simplified the deployment process for more complex applications and enabled an "out of the box" experience. Now Kubernetes was "easy" (don't quote me on easy though!).

Today every cloud provider and their mother offers a managed Kubernetes environment. Due to a set of standard interfaces, those services are mostly interchangeable, which is one of the big benefits of Kubernetes. Anyhow, it's only mostly. The small inconsistencies and implementation or performance differences may give you quite the time of your life. Not the good one, though. But let's call it "run everywhere", because it mostly is.

A great overview of the full history of Kubernetes, with all major milestones, can be found at The History of Kubernetes by Ferenc Hámori.

Kubernetes, Love or Hate

When we look into the community, opinions on Kubernetes diverge. Many people point out the internal (yet hidden) complexity of Kubernetes, a complexity which only increases as new features and additional functionality (also third-party) are added. This complexity is real, and it is a reason why folks like Kelsey Hightower or Abdelfettah Sghiouar call Kubernetes a platform to build platforms (listen to our Cloud Commute podcast episode with Abdelfettah), meaning it should be used by cloud providers (or company-internal private cloud teams) to build a platform for container deployment, but it shouldn't be used by the developers or just everyone. However, Kelsey also claimed that Kubernetes is a good place to start, not the endgame.

Kelsey Hightower on X: Kubernetes is a platform for building platforms. It's a better place to start; not the endgame.

On the other end of the spectrum you have people who refer to Kubernetes as the operating system of the cloud era. And due to its extensibility and feature richness, they're probably not too far off. Modern operating systems have mostly one job: abstract away the underlying hardware and its features. That said, Kubernetes abstracts away many aspects of the cloud infrastructure and the operational processes necessary to run containers. In that sense, yes, Kubernetes is probably a Cloud OS, especially since we started to see implementations of Kubernetes running on operating systems other than Linux. Looking at you, Microsoft.

If you’re interested in learning more about the idea of Kubernetes as a Cloud OS, Natan Yellin from Robusta Dev wrote a very insightful article named Why Kubernetes will sustain the next 50 years .

What are the next Steps?

The most pressing questions for Kubernetes as it stands today: What will be next? Where will it evolve? Are we close to the end of the line?

Looking back at Borg: a decade in, Google decided it was time to iterate on orchestration again and build upon the lessons learned. Kubernetes is about to hit its 10-year anniversary. So does that mean it's time for another iteration?

Many features in Kubernetes, such as secrets, were fine 10 years ago. Today we know that a base64-encoded "secret" is certainly not enough. Temporary user accounts, OIDC, and similar technologies can be, and are, integrated into K8s already, further increasing its complexity.
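
To illustrate the encoded-secret point: a standard Secret manifest only base64-encodes its values, so anyone with read access to the resource can decode them in one command. It is encoding, not encryption:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: c3VwZXJzZWNyZXQ=   # base64("supersecret"): obfuscation, not encryption
```

Decoding it is as simple as `echo c3VwZXJzZWNyZXQ= | base64 -d`.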

Looking beyond Kubernetes, technology always runs in three phases, the beginning or adoption phase, the middle where everyone “has” to use it, and the end, where companies start to phase it out. Personally, I feel that Kubernetes is at its height right now, standing right in the middle.

That doesn’t give us any prediction about the time frame for hitting the end though. At the moment it looks like Kubernetes will keep going and growing for a while. It doesn’t show any signs of slowing down.

Other technologies, like micro virtual machines using Kata Containers or Firecracker, are becoming more popular, offering higher isolation (hence security), but they aren't as efficient. The important element, though: they offer a CRI-compatible interface, meaning they can be used as an alternative runtime underneath Kubernetes.
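
Kubernetes already has plumbing for this through RuntimeClass. A minimal sketch, assuming the node's container runtime has been configured with a Kata Containers handler (the handler name depends on your installation):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata                # must match the handler configured in containerd/CRI-O
---
apiVersion: v1
kind: Pod
metadata:
  name: isolated-service
spec:
  runtimeClassName: kata     # this pod's containers run inside a micro VM
  containers:
    - name: app
      image: nginx:1.25
```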

In the not too distant future, I see K8s offering multiple runtime environments, just as it offers multiple storage solutions today: running simple services in normal containers, but moving services with higher isolation needs into micro VMs.

And there are other interesting developments based on Kubernetes, too. Edgeless Systems implements a confidential computing solution, provided as a K8s distribution named Constellation. Confidential computing makes use of CPU and GPU features that hardware-encrypt memory, not only for the whole system memory space, but per virtual machine, or even per container. That enables a whole set of new use cases, with end-to-end encryption for highly confidential calculations and data processing. While it's possible to use confidential computing outside Kubernetes, the orchestration and operations benefits of running those calculations inside containers make them easy to deploy and update. If you want to learn more about Constellation, we had Moritz Eckert from Edgeless Systems in our podcast not too long ago.

Future or Fad?

So, does Kubernetes have a bright future and will it stand for the next 50 years, or will we realize very soon-ish that it is not what we're looking for?

If somebody asked me today what I think about Kubernetes, I would answer similarly to my Docker answer. It is certainly the best tool we have today, making it the go-to container orchestration tool of today. Its ever-increasing complexity makes it hard to see it staying that way in the future though. I think a lot of new lessons have been learned, again. It's probably time for a new iteration. Not today, not tomorrow, but somewhere in the next few years.

Maybe this new iteration isn't an all-new tool but a Kubernetes 2.0, who knows, but something has to change. Technology doesn't stand still, and the (container) world is different from what it was 10 years ago.

If you asked somebody in the beginning of containerization, it was all about how containers have to be stateless, and what do we do today? We deploy databases into K8s, and we love it. Cloud-nativeness isn’t just stateless anymore, but I’d argue a good one-third of the container workloads may be stateful today (with ephemeral or persistent state), and it will keep increasing. The beauty of orchestration, automatic resource management, self-healing infrastructure, and everything in between is just too incredible to not use it for “everything”.

Anyhow, whatever happens to Kubernetes itself (maybe it will become an orchestration extension of the OCI?!), I think it will disappear from the eyes of the users. It (or its successor) will become the platform to build container runtime platforms. But to make that happen, better debugging features need to be made available. At the moment you have to look way too deep into Kubernetes or agent logs to find and fix issues. Whoever never had to find out why a Let's Encrypt certificate isn't renewing may raise their hand now.

To bring it to a close, Kubernetes certainly isn’t a fad, but I strongly hope it’s not going to be our future either. At least not in its current incarnation.

The post Kubernetes: Future or Fad? appeared first on simplyblock.

Production-grade Kubernetes PostgreSQL, Álvaro Hernández https://www.simplyblock.io/blog/production-grade-postgresql-on-kubernetes-with-alvaro-hernandez-tortosa-from-ongres/ Fri, 05 Apr 2024 12:13:27 +0000 https://www.simplyblock.io/?p=298

In this episode of the Cloud Commute podcast, Chris Engelbert is joined by Álvaro Hernández Tortosa, a prominent figure in the PostgreSQL community and CEO of OnGres. Álvaro shares his deep insights into running production-grade PostgreSQL on Kubernetes, a complex yet rewarding endeavor. The discussion covers the challenges, best practices, and innovations that make PostgreSQL a powerful database choice in cloud-native environments.

This interview is part of the simplyblock Cloud Commute Podcast, available on YouTube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site.

Key Takeaways

Q: Should you deploy PostgreSQL in Kubernetes?

Deploying PostgreSQL in Kubernetes is a strategic move for organizations aiming for flexibility and scalability. Álvaro emphasizes that Kubernetes abstracts the underlying infrastructure, allowing PostgreSQL to run consistently across various environments—whether on-premise or in the cloud. This approach not only simplifies deployments but also ensures that the database is resilient and highly available.

Q: What are the main challenges of running PostgreSQL on Kubernetes?

Running PostgreSQL on Kubernetes presents unique challenges, particularly around storage and network performance. Network disks, commonly used in cloud environments, often lag behind local disks in performance, impacting database operations. However, these challenges can be mitigated by carefully choosing storage solutions and configuring Kubernetes to optimize performance. Furthermore, managing PostgreSQL’s ecosystem—such as backups, monitoring, and high availability—requires robust tooling and expertise, which can be streamlined with solutions like StackGres.

Q: Why should you use Kubernetes for PostgreSQL?

Kubernetes offers a powerful platform for running PostgreSQL due to its ability to abstract infrastructure details, automate deployments, and provide built-in scaling capabilities. Kubernetes facilitates the management of complex PostgreSQL environments, making it easier to achieve high availability and resilience without being locked into a specific vendor’s ecosystem.

Q: Can I use PostgreSQL on Kubernetes with PGO?

Yes, you can. Tools like the PostgreSQL Operator (PGO) for Kubernetes simplify the management of PostgreSQL clusters by automating routine tasks such as backups, scaling, and updates. These operators are essential for ensuring that PostgreSQL runs efficiently on Kubernetes while reducing the operational burden on database administrators.

EP 06: Building and operating a production-grade PostgreSQL in Kubernetes

In addition to highlighting the key takeaways, it’s essential to provide deeper context and insights that enrich the listener’s understanding of the episode. By offering this added layer of information, we ensure that when you tune in, you’ll have a clearer grasp of the nuances behind the discussion. This approach enhances your engagement with the content and helps shed light on the reasoning and perspective behind the thoughtful questions posed by our host, Chris Engelbert. Ultimately, this allows for a more immersive and insightful listening experience.

Key Learnings

Q: How does Kubernetes scheduler work with PostgreSQL?

Kubernetes uses its scheduler to manage how and where PostgreSQL instances are deployed, ensuring optimal resource utilization. However, understanding the nuances of Kubernetes’ scheduling can help optimize PostgreSQL performance, especially in environments with fluctuating workloads.

simplyblock Insight: Leveraging simplyblock's solution, users can integrate sophisticated monitoring and management tools with Kubernetes, allowing them to automate the scaling and scheduling of PostgreSQL workloads, thereby ensuring that database resources are efficiently utilized and downtime is minimized.

Q: What is the best experience of running PostgreSQL in Kubernetes?

The best experience comes from utilizing a Kubernetes operator like StackGres, which simplifies the deployment and management of PostgreSQL clusters. StackGres handles critical functions such as backups, monitoring, and high availability out of the box, providing a seamless experience for both seasoned DBAs and those new to PostgreSQL on Kubernetes.

simplyblock Insight: By using simplyblock's Kubernetes-based solutions, you can further enhance your PostgreSQL deployments with features like dynamic scaling and automated failover, ensuring that your database remains resilient and performs optimally under varying loads.

Q: How does disk access latency impact PostgreSQL performance in Kubernetes?

Disk access latency is a significant factor in PostgreSQL performance, especially in Kubernetes environments where network storage is commonly used. While network storage offers flexibility, it typically has higher latency compared to local storage, which can slow down database operations. Optimizing storage configurations in Kubernetes is crucial to minimizing latency and maintaining high performance.

simplyblock Insight: simplyblock’s advanced storage solutions for Kubernetes can help mitigate these latency issues by providing optimized, low-latency storage options tailored specifically for PostgreSQL workloads, ensuring your database runs at peak efficiency. Q: What are the advantages of clustering in PostgreSQL on Kubernetes?

Clustering PostgreSQL in Kubernetes offers several advantages, including improved fault tolerance, load balancing, and easier scaling. Kubernetes operators like StackGres enable automated clustering, which simplifies the process of setting up and managing a highly available PostgreSQL cluster.

simplyblock Insight: With simplyblock, you can easily deploy clustered PostgreSQL environments that automatically adjust to your workload demands, ensuring continuous availability and optimal performance across all nodes in your cluster.

Additional Nugget of Information

Q: What are the advantages of clustering in Postgres?

A: Clustering in PostgreSQL provides several benefits, including improved performance, high availability, and better fault tolerance. Clustering allows multiple database instances to work together, distributing the load and ensuring that if one node fails, others can take over without downtime. This setup is particularly advantageous for large-scale applications that require high availability and resilience. Clustering also enables better scalability, as you can add more nodes to handle increasing workloads, ensuring consistent performance as demand grows.

Conclusion

Deploying PostgreSQL on Kubernetes offers powerful capabilities but comes with challenges. Álvaro Hernández Tortosa highlights how StackGres simplifies this process, enhancing performance, ensuring high availability, and making PostgreSQL more accessible. With the right tools and insights, you can confidently manage PostgreSQL in a cloud-native environment.

Full Video Transcript

Chris Engelbert: Welcome to this week’s episode of Cloud Commute podcast by simplyblock. Today, I have another incredible guest, a really good friend, Álvaro Hernández from OnGres. He’s very big in the Postgres community. So hello, and welcome, Álvaro.

Álvaro Hernández Tortosa: Thank you very much, first of all, for having me here. It’s an honor.

Chris Engelbert: Maybe just start by introducing yourself, who you are, what you’ve done in the past, how you got here. Well, except me inviting you.

Álvaro Hernández Tortosa: OK, well, I don’t know how to describe myself, but I would say, first of all, I’m a big nerd, big fan of open source. And I’ve been working with Postgres, I don’t know, for more than 20 years, 24 years now. So I’m a big Postgres person. There’s someone out there in the community that says that if you say Postgres three times, I will pop up there. It’s kind of like Superman or Batman or these superheroes. No, I’m not a superhero. But anyway, professionally, I’m the founder and CEO of a company called OnGres. Let’s guess what it means, On Postgres. So it’s pretty obvious what we do. So everything revolves around Postgres, but in reality, I love all kinds of technology. I’ve been working a lot with many other technologies. I know you because of being a Java programmer, which is kind of my hobby. I love programming in my free time, which almost doesn’t exist. But I try to get some from time to time. And everything related to technology in general, I’m also a big fan and supporter of open source. I have contributed and keep contributing a lot to open source. I also founded some open source communities, like for example, I’m a Spaniard. I live in Spain. And I founded Debian Spain, an association like, I don’t know, 20 years ago. More recently, I also founded a foundation, a nonprofit foundation also in Spain called Fundación PostgreSQL. Again, guess what it does? And I try to engage a lot with the open source communities. We, by the way, organized a conference for those who are interested in Postgres in the magnificent island of Ibiza in the Mediterranean Sea in September this year, 9th to 11th September for those who want to join. So yeah, that’s probably a brief intro about myself.

Chris Engelbert: All right. So you are basically the Beetlejuice of Postgres. That’s what you’re saying.

Álvaro Hernández Tortosa: Beetlejuice, right. That’s more upper bid than superheroes. You’re absolutely right.

Chris Engelbert: I’m not sure if he is a superhero, but he’s different at least. Anyway, you mentioned OnGres. And I know OnGres isn’t really like the first company. There were quite a few before, I think, El Toro, a database company.

Álvaro Hernández Tortosa: Yes, Toro DB.

Chris Engelbert: Oh, Toro DB. Sorry, close, close, very close. So what is up with that? You’re trying to do a lot of different things and seem to love trying new things, right?

Álvaro Hernández Tortosa: Yes. So I sometimes define myself as a 0.x serial entrepreneur, meaning that I've tried several ventures and sold none of them. But I'm still trying. I try to be resilient, and I keep pushing the ideas that I have in the back of my head. So yes, I've done several ventures, all of them around certain patterns. For example, you're asking about Toro DB. Toro DB is essentially open source software that is meant to replace MongoDB with, you guessed it, Postgres, right? There's a certain pattern in my professional life. And Toro DB was. I'm speaking in the past because it is unfortunately no longer a maintained open source project. We moved on to something else, which is OnGres. But the idea of Toro DB was to essentially replicate these documents live from MongoDB and in the process, in real time, transform them into a set of relational tables that got stored inside of a Postgres database. So it enabled you to do SQL queries on your documents that were in MongoDB. Think of it as a MongoDB replica. You can keep your MongoDB cluster if you want, and then you have all the data in SQL. This was great for analytics. You could get great speedups by normalizing data automatically and then doing queries with the power of SQL, which obviously is much broader and richer than MongoDB's query language, especially for analytics. We got like 100 times faster on most queries. So it was an interesting project.

Chris Engelbert: So that means you basically generated the schema on the fly and then generated the table for that schema specifically? Interesting.

Álvaro Hernández Tortosa: Yeah, it was generating tables and columns on the fly.

OnGres StackGres: Operator for Production-Grade PostgreSQL on Kubernetes

Chris Engelbert: Right. Ok, interesting. So now you’re doing the OnGres thing. And OnGres has, I think, the main product, StackGres, as far as I know. Can you tell a little bit about that?

Álvaro Hernández Tortosa: Yes. So OnGres, as I said, means On Postgres. And one of our goals in OnGres is that we believe that Postgres is a fantastic database. I don’t need to explain that to you, right? But it’s kind of the Linux kernel, if I may use this parallel. It’s a bit bare bones. You need something around it. You need a distribution, right? So Postgres is a little bit the same thing. The core is small, it’s fantastic, it’s very featureful, it’s reliable, it’s trustable. But it needs tools around it. So our vision in OnGres is to develop this ecosystem around this Postgres core, right? And one of the things that we experience during our professional lifetime is that Postgres requires a lot of tools around it. It needs monitoring, it needs backups, it needs high availability, it needs connection pooling.

By the way, do not use Postgres without connection pooling, right? So you need a lot of tools around. And none of these tools come from a core. You need to look into the ecosystem. And actually, this is good and bad. It’s good because there’s a lot of options. It’s bad because there’s a lot of options. Meaning which one to choose, which one is good, which one is bad, which one goes with a good backup solution or the good monitoring solution and how you configure them all. So this was a problem that we coined as a stack problem. So when you really want to run Postgres in production, you need the stack on top of Postgres, right? To orchestrate all these components.

Now, the problem is that we've been doing this for a long time for our customers. Typically, we love infrastructure as code, right? And everything was done with Ansible and similar tools: Terraform for infrastructure and Ansible for orchestrating these components. But the reality is that every environment into which we looked was slightly different. We couldn't just take our Ansible code, run it, and you've got this stack. Because now the storage is different. Your networking is different. Your entry point: here, one is using virtual IPs. That one is using DNS. That one is using proxies. And then the compute is also somehow different. And it was not reusable. We were doing a lot of copy, paste, modify, something that was not very sustainable. At some point, we started thinking, is there a way in which we can pack this stack into a single deployable unit that we can take essentially anywhere? And the answer was Kubernetes. Kubernetes provides us this abstraction where we can abstract away the compute, the storage, the networking, and code against a programmable API, so that we can indeed create this package. And that's StackGres.

StackGres is the stack of components you need to run production Postgres, packaged in a way that is uniform across any environment where you want to run it: cloud, on-prem, it doesn't matter. And it is production ready! It's packaged at a very, very high level. So basically you barely need, I would say you don't even need, Postgres knowledge to run a production-ready, enterprise-quality Postgres cluster in production. And that's the main goal of StackGres.

Chris Engelbert: Right, right. And as far as I know, I think it’s implemented as a Kubernetes operator, right?

Álvaro Hernández Tortosa: Yes, exactly.

Chris Engelbert: And there’s quite a few other operators as well. But I know that StackGres has some things which are done slightly differently. Can you talk a little bit about that? I don’t know how much you wanna actually make this public right now.

Álvaro Hernández Tortosa: No, actually everything is open source. Our roadmap is open source, our issues are open source. I'm happy to share everything. Well, first of all, what I would say is that the operator pattern is essentially these controllers that take actions on your cluster, and the CRDs. We gave a lot of thought to these CRDs. I would say that for a lot of operators, CRDs are kind of a byproduct, a second thought: "I have my objects and then some script generates the CRDs." No, we said CRDs are our user-facing API. The CRDs are our extended API. And the goal of operators is to abstract away and package business logic, right? And expose it with a simple user interface.

So we designed our CRDs to be very, very high level, very amenable to the user, so that, again, you don't require any Postgres expertise. So if you look at the CRDs, in practical terms the YAMLs, right? The YAMLs that you write to deploy something on StackGres should be simple enough that you could explain them to your five-year-old kid, and your five-year-old kid should be able to deploy Postgres into a production-quality cluster, right? That's our goal. And if we didn't fulfill this goal, please raise an issue on our public issue tracker on GitLab, because we definitely have failed if that's not true. So instead of focusing on the usual Postgres user, very knowledgeable, very high level, most operators focused on low-level CRDs, and they require Postgres expertise, probably a lot. We want to make Postgres more mainstream than ever, right? Postgres increases in popularity every year and is being adopted by more and more organizations, but not everybody's a Postgres expert. We want to make Postgres universally accessible for everyone. So one of the things is that we put a lot of effort into this design. And instead of one big, gigantic CRD, we have multiple. They can actually be connected, like in an ER diagram. So you understand the relationships: you create one and then you reference it many times; you don't need to repeat or reconfigure the configuration files.

And StackGres is the operator that arguably supports the largest number of extensions, over 200 extensions as of now and growing. And we did this because we developed a custom solution, which is also open-sourced by StackGres, where we can load extensions dynamically into the cluster. So we don't need to build you a fat container with 200 extensions and a lot of security issues, right? Rather, we deploy you a container with no extensions, and then you say, "I want this, this, this and that," and they will appear in your cluster automatically. And this is done via simple YAML. So we have a very powerful extension mechanism. And the other thing is that we not only expose the usual CRD YAML interface for interacting with StackGres, which is more than fine and I love it, but it also comes with a fully fledged web console. Not everybody likes the command line or the GitOps approach. We do, but not everybody does. And it's a fully fledged web console which also supports single sign-on, where you can integrate with your AD, with your OIDC provider, anything that you want. It has detailed, fine-grained permissions based on Kubernetes RBAC. So you can say who can create clusters, who can view configurations, who can do anything. And last but not least, there's a REST API. So if you prefer to automate and integrate with another kind of solution, you can also use the REST API to create and manage clusters. And these three mechanisms, the YAML files/CRDs, the REST API and the web console, are fully interchangeable. You can use one for one operation, another for the next; everything goes back to the same state. So you can use any one that you want.
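
For a flavor of how high-level those YAMLs are, a cluster with dynamically loaded extensions looks roughly like the sketch below. The field names are reproduced from memory of the StackGres CRDs and should be checked against the current StackGres reference:

```yaml
apiVersion: stackgres.io/v1
kind: SGCluster
metadata:
  name: demo
spec:
  instances: 2               # one primary plus one replica
  postgres:
    version: "16"
    extensions:
      - name: timescaledb    # fetched and loaded dynamically, no fat image rebuild
  pods:
    persistentVolume:
      size: 10Gi
```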

And lately we have also added sharding. Sharding scales out with solutions like Citus, but we also support, for interoperability, Postgres with partitioning and Apache ShardingSphere. Our way is to create a cluster of multiple instances: not only one primary and one replica, but a coordinator layer and then shards, and each shard, like the coordinator, has its replicas. So typically dozens of instances, and you can create them with a simple YAML file and a very high-level description; it requires only some basic knowledge and wires everything for you. So it's very, very convenient and makes things simple.

Chris Engelbert: Right. So the plugin mechanism, or the extension mechanism, that was exactly what I was hinting at. That was mind-blowing. I'd never seen anything like that before you showed it last year in Ibiza. The other thing that is always a little bit of a head-scratcher, I think, for a lot of people, is when they hear that a Kubernetes operator is actually written in Java. I think Red Hat built the original framework, so it kind of makes sense that Red Hat is doing that, but the original framework was a Go library. And Java would probably not be the first choice to do that. So how did that happen?

Álvaro Hernández Tortosa: Well, first of all, you're right. The operator framework is written in Go, and there was nothing other than Go at the time. So we were looking at that, but our team was a team of very, very senior Java programmers, and none of them were Go programmers, right? I've seen in the Postgres community, and in other communities, that for people who are more in the DevOps world, switching to Go is a bit more natural, but at the same time, they are not senior from a Go programming perspective, right? The same would have happened with our team. They would switch from Java to Go, but they wouldn't have been senior in Go, obviously, right? So it would have taken some time to develop those skills. On the other hand, we looked at what the technology behind it is: what is an operator? An operator is essentially no more than an HTTP server that receives callbacks from Kubernetes, and a client, because it makes calls to Kubernetes. And HTTP clients and servers can be written in any language. So we looked at the core, how complicated it is, and how much this operator framework brings to you. And we saw that it was not that much.

And actually, something I just mentioned before: with the framework, the CRDs are kind of generated from your structures, and we really wanted to do it the opposite way. It's like with a database. You either use an ORM to read your database's existing schema, which you develop with all your SQL capabilities, or you just create an object and let that generate the database. I prefer the former. So we did the same thing with the CRDs, right? We wanted to develop them first. So Java was more than okay to develop a Kubernetes operator, and our team was expert in Java. By doing it in Java, we were able to be very efficient and deliver a lot of value, a lot of features, very, very fast, without having to retrain anyone, learn a new language, or learn new skills. On top of this, there's sometimes a concern that Java requires a JVM, which is kind of a heavy environment, right? It consumes memory, resources, and disk. But by default, StackGres uses a compilation technology from a whole project built around it, called GraalVM. And this allows you to generate native images that are indistinguishable from any other Linux binary you can have on your system. And we deploy StackGres with native images. You can also switch to JVM images if you prefer. We expose both, but by default, they are native images. So at the end of the day, StackGres is a several-megabyte Linux binary in a container, and that's it.

Chris Engelbert: That makes sense. And I like that you basically pointed out that the efficiency of the existing developers was much more important than being cool and going for a new language just because everyone does. So we talked about the operator quite a bit. What are your general thoughts on databases in the cloud or specifically in Kubernetes? What are the issues you see, the problems running a database in such an environment?

Álvaro Hernández Tortosa: Well, it's a wide topic, right? And I think one of the most interesting topics that we're seeing lately is a concern about cost and performance. So there's kind of a trade-off, as usual, right? There's a trade-off between the convenience of "I want to run a database and almost forget about it," which is why you switch to a cloud managed service. Which is not always true, by the way, because forgetting about it means that nobody's gonna then vacuum your database, repack your tables, right? Optimize your queries, analyze whether you have unused indexes. If you're very small, that's more than okay; you can assume that you don't need to touch your database. But if you grow over a certain level, you're gonna need the same DBAs, at least for everything beyond the basic operations of the database, which are monitoring, high availability, and backups. Those are the three main areas that a managed service provides to you.

But so there’s convenience, but then there’s an additional cost. And this additional cost sometimes is quite notable, right? So it’s typically around 80% premium on a N+1/N number of instances because sometimes we need an extra even instance for many cloud services, right? And that multiply by 1.8 ends up being two point something in the usual case. So you’re overpaying that. So you need to analyze whether this is good for you from this perspective of convenience or if you want to have something else. On the other hand, almost all cloud services use network disks. And these network disks are very good and have improved performance a lot in the last years, but still they are far from the performance of a local drive, right? And running databases with local drives has its own challenges, but they can be addressed. And you can really, really move the needle by kind of, I don’t know if that’s the right term to call it self-hosting, but this trend of self-hosting, and if we could marry the simplicity and the convenience of managed services, right?

With the ability to run on any environment, and to run on any environment at a much higher performance, I think that's kind of an interesting trend right now and a good sweet spot. And Kubernetes, to try to marry all the terms that you mentioned in the question, actually is one driver towards this goal, because it enables infrastructure independence, and it supports both network disks and local disks equally. So it's kind of an enabler for this pattern, a trend that I see becoming more and more important as of now, and one that we are definitely looking forward to.

Chris Engelbert: Right, I like that you pointed out that there are ways to address the local storage issues. Just a shameless plug: we're actually working on something.

Álvaro Hernández Tortosa: I heard something.

The Biggest Trend in Containers?

Chris Engelbert: Oh, you heard something. (laughing) All right, last question because we’re also running out of time. What do you see as the biggest trend right now in containers, cloud, whatever? What do you think is like the next big thing? And don’t say AI, everyone says that.

Álvaro Hernández Tortosa: Oh, no. Well, you know what? Let me do a shameless plug here, right?

Chris Engelbert: All right. I did one. (laughing)

Álvaro Hernández Tortosa: So there’s a technology we’re working on right now that works for our use case, but will work for many use cases also, which is what we’re calling dynamic containers. So containers are essential as something that is static, right? You build a container, you have a build with your Dockerfile, whatever you use, right? And then that image is static. It is what it is. Contains the layers that you specified and that’s all. But if you look at any repository in Docker Hub, right? There’s plenty of tags. You have what, for example, Postgres. There’s Postgres based on Debian. There’s Postgres based on Alpine. There’s Postgres with this option. Then you want this extension, then you want this other extension. And then there’s a whole variety of images, right? And each of those images needs to be built independently, maintained, updated independently, right? But they’re very orthogonal. Like upgrading the Debian base OS has nothing to do with the Postgres layer, has nothing to do with the timescale extension, has nothing to do with whether I want the debug symbols or not. So we’re working on technology with the goal of being able to, as a user, express any combination of items I want for my container and get that container image without having to rebuild and maintain the image with the specific parameters that I want.

Chris Engelbert: Right, and let me guess, that is how the Postgres extension stuff works.

Álvaro Hernández Tortosa: It is meant to be, at first, a solution for the Postgres extensions, but it's actually quite broad and quite general, right? For example, I was discussing recently with some folks of the OpenTelemetry community, and the OpenTelemetry Collector, which is the router for signals in the OpenTelemetry world, has the same architecture. It has around 200 plugins, right? And you don't want a container image with those 200 plugins, which potentially, because many are third-party, may have some security vulnerabilities. And even if there's just an update, you don't want to update all of those and restart your containers and all that, right? So why don't you get a container image with the OpenTelemetry Collector with just this source and this receiver and this exporter, right? So that's actually probably even more applicable.

Chris Engelbert: Yeah, I think that makes sense, right? I think that is a really good one, especially because the original idea behind static containers was that being static gives you some kind of consistency and some security about how the container looks. But we figured out over time that that is not the best solution. So I'm really looking forward to that becoming probably a more general thing.

Álvaro Hernández Tortosa: To be honest, the idea, I call it dynamic containers, but in reality, from a user perspective, they're as static as before. They are dynamic from the registry perspective.

Chris Engelbert: Right, okay, fair enough. All right, thank you very much. It was a pleasure like always talking to you. And for the other ones, I see, hear, or read you next week with my next guest. And thank you to Álvaro, thank you for being here. It was appreciated like always.

Álvaro Hernández Tortosa: Thank you very much.

The post Production-grade Kubernetes PostgreSQL, Álvaro Hernández appeared first on simplyblock.

What is a Kubernetes Persistent Volume? https://www.simplyblock.io/blog/what-is-a-kubernetes-persistent-volume/ Wed, 03 Apr 2024 12:13:37 +0000 https://www.simplyblock.io/?p=87

A persistent volume is a slice of storage provisioned by a Kubernetes administrator that can be attached and mounted to pods. Like everything in Kubernetes, it is a resource inside the cluster, and its lifecycle is either bound to the lifecycle of the pod or can survive pod deletions. It is backed by storage provided through the container storage interface (CSI).

Stateful Workloads and Data Persistence

Figure: Simplyblock CSI driver for Kubernetes persistent volumes (architecture overview)

Originally designed to be stateless, containers were supposed to also be ephemeral and lightweight. They were designed to “boot” quickly and be small, maybe a few megabytes in size, sharing much of their host operating system.

This design quickly became a hassle, and people realized that you often have to persist data between container restarts. Some of this storage can be ephemeral (living until the pod ceases to exist) or persistent (which will stay alive indefinitely). This is specifically important for applications like databases or logs, as well as many other types of applications that need to hold serialized session information or similar states.

In general, the bigger the container-based landscape, the higher the chance of having stateful workloads in your deployments, especially with large Kubernetes deployments consisting of hundreds of nodes.

Using the CSI and its storage plugins, it is possible and recommended (at least by us) to separate the storage and compute infrastructure. This disaggregated architecture enables independent scalability and allows users to choose the best tool for the job.

To bind a persistent volume to a Kubernetes container, a so-called persistent volume claim is required, which consists of the container's storage requirement definition. That includes the StorageClass, basically what type of backing storage is requested (normally a 1:1 mapping to a specific CSI implementation, like simplyblock's driver, or a combination of the CSI implementation and some characteristics, such as a performance policy), but also the requested size and the lifecycle binding of the (persistent) volume claim and its persistent volume.
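
A minimal claim carrying exactly those pieces of information (class, access mode, size) might look like this; the class name is illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  storageClassName: fast-nvme   # which backing implementation/characteristics
  accessModes:
    - ReadWriteOnce             # mountable read-write by a single node
  resources:
    requests:
      storage: 100Gi            # requested size
```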

Why are Kubernetes Persistent Volumes Essential?

One could ask why not just directly bind to directories or other storage backends. The whole point of Kubernetes is to abstract away the underlying operating system, or better said, the whole environment outside of Kubernetes. This enables developers and DevOps folks to make specific assumptions about the environment, bringing development and production environments closer together. Especially for developers, this is a big win since debugging is easier than ever before.

To achieve this abstraction, applications, and containers should not make assumptions about the underlying storage (or any other resource). Hence, provisioning storage is part of the actual deployment process.

Managing persistent volume resources this way has quite a few benefits:

  1. Decoupling: PVs decouple the storage provisioning from the pod or container lifecycle, allowing more flexibility and independence in managing storage resources.
  2. Storage Persistence: PVs offer persistent storage, which survives pod terminations and rescheduling.
  3. Dynamic Provisioning: PVs can be dynamically provisioned, enabling automatic on-demand creation and management according to the application requirements.
  4. Portability: While not strictly true for all backing storage types, PVs enable the separation of concerns by providing access to local and remote storage options.
  5. Resource Management: PVs, or better said, their storage class implementations, can provide better resource management by virtualizing the storage and sharing access to the same backing storage, hence reducing storage redundancy and optimizing resource allocation.

Types of Persistent Storage in Kubernetes

Kubernetes enables a wide variety of application deployments. Therefore, it also offers many storage options. While there are many more, we want to focus on the two specific ones that enable Kubernetes persistent storage, speed, and high availability (at least to some extent).

Kubernetes persistent volumes via storage plugins (CSI drivers) are your best bet for persistent storage and offer the widest variety of backing storage. Implementations can either offer access to an independently running storage cluster or appliance (disaggregated) or use the local storage of the Kubernetes worker nodes and offer it as a shared, clustered resource or just as an enhanced version of local mounts (hyperconverged). This type of storage is specifically interesting to persistent volumes since many companies provide enhanced features such as immediate snapshots, clones, thin provisioning, and more.

Editorial: Simplyblock provides the industry-leading cloud-native Kubernetes Enterprise storage to enable IO-intensive and latency-sensitive stateful workloads (like databases) that need increased reliability, speed, and cost-effectiveness. Additionally, simplyblock offers advanced deployment options, which include hyperconverged, disaggregated, and a combination of the two.

HostPath volumes are backed by a directory on the host machine. They stay intact when the pod or container is terminated or even deleted and can be reattached to a new pod or container as long as the new one is scheduled and created on the same Kubernetes worker node. This offers some type of durability but comes with the additional complexity of ensuring containers live on specific workers. While tainting nodes may work, a globally available (in the sense of across the Kubernetes cluster) backing storage will be easier and enable better resource utilization. This type is often used for speed requirements, taking a hit on high availability and fault tolerance.
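
A minimal sketch of such a HostPath-backed pod; note the pinning to a specific worker, since the data only exists on that node's filesystem (node and path names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-worker
spec:
  nodeName: worker-1            # data in /var/lib/scratch lives only on this node
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /data
  volumes:
    - name: scratch
      hostPath:
        path: /var/lib/scratch
        type: DirectoryOrCreate # create the directory if it does not exist
```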

Editorial: Simplyblock can provide high IOPS and predictable low latency without sacrificing high availability and fault tolerance.

How do Persistent Volumes Work?

Persistent Volumes are implemented using a combination of Kubernetes components and container storage interface (CSI)-backed storage plugins or drivers.

Internally, persistent storage is represented as a combination of persistent volumes (the actual logical storage entity) and persistent volume claims (the “assignment” of a volume to a container’s request for storage). Both types are Kubernetes resources and can be created, updated, or deleted through API calls or declarative resource definitions.

Using a StorageClass, a Kubernetes persistent volume claim (PVC) requests a specific backing implementation to provide a persistent volume. The storage class always defines which kind of storage implementation is used. A single storage plugin, however, may provide multiple storage classes, each encoding specific performance or other characteristics, such as high availability or levels of fault tolerance. The specific details depend on the storage plugin.
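A StorageClass ties a name to a provisioner (the storage plugin) and its parameters. The following is a hedged sketch with a hypothetical provisioner name and a made-up parameter; real values are defined by the respective storage plugin:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-storage-class
provisioner: csi.example.com     # hypothetical CSI driver name
parameters:
  tier: high-performance         # driver-specific characteristic
reclaimPolicy: Delete            # remove the volume when the PVC is deleted
allowVolumeExpansion: true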

When the persistent volume claim is provided with a matching persistent volume, Kubernetes will mount the persistent volume into the pod, enabling access to the PV from inside the container. Additionally, the PVC may restrict or permit reading or writing data into the storage. These permissions can also enable features like simultaneously attaching the same persistent volume to multiple containers, provided that the backing implementation offers such functionality.
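Inside the pod specification, the claim is then referenced by name and mounted like any other volume; a minimal sketch (image and paths are examples):

---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx:stable              # example image
    volumeMounts:
    - name: data
      mountPath: /var/lib/data       # path inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-claim          # binds the pod to the claim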

Last but not least, when the PV’s lifecycle is bound to the PVC, automatic resource management, like dynamic allocation and deallocation, happens when the PVC is created or deleted. Depending on the type of deployment, this can be a real benefit or completely useless. It’s up to you to decide when you want automatic resource management and when not—that’s the beauty of declarative configuration.

Your Stateful Workload

Persistent volumes (PVs) are critical in deploying stateful workloads into Kubernetes environments. By abstracting storage provisioning and management, PVs enable seamless integration of many storage resources and support the “use the best tool for the job” rule. That said, not all applications have the same storage requirements, hence providing multiple storage classes may be necessary. Kubernetes makes this particularly easy with its storage plugin design and the definition of storage classes. Understanding persistent volumes is essential for Kubernetes administrators and developers looking to optimize storage management within their containerized environments.

Simplyblock offers a flexible, enterprise-ready Kubernetes storage solution that combines high performance, predictable low latency, high availability, and fault tolerance. In addition, simplyblock provides advanced features such as immediate snapshots (copy-on-write), local and remote clones, cluster replication, thin provisioning, storage overcommitment, and zero-downtime scalability. Do you want to learn more about simplyblock, or are you ready to use it?

The post What is a Kubernetes Persistent Volume? appeared first on simplyblock.

]]>
How the CSI (Container Storage Interface) Works https://www.simplyblock.io/blog/how-the-csi-container-storage-interface-works/ Fri, 29 Mar 2024 12:13:27 +0000 https://www.simplyblock.io/?p=302 If you work with persistent storage in Kubernetes, maybe you’ve seen articles about how to migrate from in-tree to CSI volumes, but aren’t sure what all the fuss is about? Or perhaps you’re trying to debug a stuck VolumeAttachment that won’t unmount from a node, holding up your important StatefulSet rollout? A clear understanding of […]

The post How the CSI (Container Storage Interface) Works appeared first on simplyblock.

]]>
If you work with persistent storage in Kubernetes, maybe you’ve seen articles about how to migrate from in-tree to CSI volumes, but aren’t sure what all the fuss is about? Or perhaps you’re trying to debug a stuck VolumeAttachment that won’t unmount from a node, holding up your important StatefulSet rollout? A clear understanding of what the Container Storage Interface (or CSI for short) is and how it works will give you confidence when dealing with persistent data in Kubernetes, allowing you to answer these questions and more!

Editorial: This blog post is written by a guest author, Steven Sklar from QuestDB. It appeared first on his private blog at sklar.rocks. We appreciate his contributions to the Kubernetes ecosystem and wanted to thank him for letting us repost his article. Steven, you rock! 🔥

The Container Storage Interface is an API specification that enables developers to build custom drivers which handle the provisioning, attaching, and mounting of volumes in containerized workloads. As long as a driver correctly implements the CSI API spec, it can be used in any supported Container Orchestration system, like Kubernetes. This decouples persistent storage development efforts from core cluster management tooling, allowing for the rapid development and iteration of storage drivers across the cloud native ecosystem.

In Kubernetes, the CSI has replaced legacy in-tree volumes with a more flexible means of managing storage mediums. Previously, in order to take advantage of new storage types, one would have had to upgrade an entire cluster’s Kubernetes version to access new PersistentVolume API fields for a new storage type. But now, with the plethora of independent CSI drivers available, you can add any type of underlying storage to your cluster instantly, as long as there’s a driver for it.

But what if existing drivers don’t provide the features that you require and you want to build a new custom driver? Maybe you’re concerned about the ramifications of migrating from in-tree to CSI volumes? Or, you simply want to learn more about how persistent storage works in Kubernetes? Well, you’re in the right place! This article will describe what the CSI is and detail how it’s implemented in Kubernetes.

It’s APIs all the way down

Like many things in the Kubernetes ecosystem, the Container Storage Interface is actually just an API specification. In the container-storage-interface/spec GitHub repo, you can find this spec in 2 different formats:

  1. A protobuf file that defines the API schema in gRPC terms
  2. A markdown file that describes the overall system architecture and goes into detail about each API call

What I’m going to discuss in this section is an abridged version of that markdown file, while borrowing some nice ASCII diagrams from the repo itself!

Architecture

A CSI Driver has 2 components: a Node Plugin and a Controller Plugin. The Controller Plugin is responsible for high-level volume management: creating, deleting, attaching, detaching, snapshotting, and restoring physical (or virtualized) volumes. If you’re using a driver built for a cloud provider, like EBS on AWS, the driver’s Controller Plugin communicates with AWS HTTPS APIs to perform these operations. For other storage types like NFS, iSCSI, ZFS, and more, the driver sends these requests to the underlying storage’s API endpoint, in whatever format that API accepts.

Editorial: The same is true for simplyblock. Simplyblock’s CSI driver implements all of the necessary calls described below, making it a perfect drop-in replacement for Amazon EBS. If you want to learn more, read: Why simplyblock.

On the other hand, the Node Plugin is responsible for mounting and provisioning a volume once it’s been attached to a node. These low-level operations usually require privileged access, so the Node Plugin is installed on every node in your cluster’s data plane, wherever a volume could be mounted.

The Node Plugin is also responsible for reporting metrics like disk usage back to the Container Orchestration system (referred to as the “CO” in the spec). As you might have guessed already, I’ll be using Kubernetes as the CO in this post! But what makes the spec so powerful is that it can be used by any container orchestration system, like Nomad for example, as long as it abides by the contract set by the API guidelines.

The specification doc provides a few possible deployment patterns, so let’s start with the most common one.

CO "Master" Host
+-------------------------------------------+
|                                           |
|  +------------+           +------------+  |
|  |     CO     |   gRPC    | Controller |  |
|  |            +----------->   Plugin   |  |
|  +------------+           +------------+  |
|                                           |
+-------------------------------------------+

CO "Node" Host(s)
+-------------------------------------------+
|                                           |
|  +------------+           +------------+  |
|  |     CO     |   gRPC    |    Node    |  |
|  |            +----------->   Plugin   |  |
|  +------------+           +------------+  |
|                                           |
+-------------------------------------------+ 

Since the Controller Plugin is concerned with higher-level volume operations, it does not need to run on a host in your cluster’s data plane. For example, in AWS, the Controller makes AWS API calls like ec2:CreateVolume, ec2:AttachVolume, or ec2:CreateSnapshot to manage EBS volumes. These functions can be run anywhere, as long as the caller is authenticated with AWS. All the CO needs is to be able to send messages to the plugin over gRPC. So in this architecture, the Controller Plugin is running on a “master” host in the cluster’s control plane.

On the other hand, the Node Plugin must be running on a host in the cluster’s data plane. Once the Controller Plugin has done its job by attaching a volume to a node for a workload to use, the Node Plugin (running on that node) will take over by mounting the volume to a well-known path and optionally formatting it. At this point, the CO is free to use that path as a volume mount when creating a new containerized process; so all data on that mount will be stored on the underlying volume that was attached by the Controller Plugin. It’s important to note that the Container Orchestrator, not the Controller Plugin, is responsible for letting the Node Plugin know that it should perform the mount.

Volume Lifecycle

The spec provides a flowchart of basic volume operations, also in the form of a cool ASCII diagram:

   CreateVolume +------------+ DeleteVolume
 +------------->|  CREATED   +--------------+
 |              +---+----^---+              |
 |       Controller |    | Controller       v
+++         Publish |    | Unpublish       +++
|X|          Volume |    | Volume          | |
+-+             +---v----+---+             +-+
                | NODE_READY |
                +---+----^---+
               Node |    | Node
            Publish |    | Unpublish
             Volume |    | Volume
                +---v----+---+
                | PUBLISHED  |
                +------------+

Mounting a volume is a synchronous process: each step requires the previous one to have run successfully. For example, if a volume does not exist, how could we possibly attach it to a node?

When publishing (mounting) a volume for use by a workload, the Node Plugin first requires that the Controller Plugin has successfully published a volume at a directory that it can access. In practice, this usually means that the Controller Plugin has created the volume and attached it to a node. Now that the volume is attached, it’s time for the Node Plugin to do its job. At this point, the Node Plugin can access the volume at its device path to create a filesystem and mount it to a directory. Once it’s mounted, the volume is considered to be published and it is ready for a containerized process to use. This ends the CSI mounting workflow.

Continuing the AWS example, when the Controller Plugin publishes a volume, it calls ec2:CreateVolume followed by ec2:AttachVolume. These two API calls allocate the underlying storage by creating an EBS volume and attaching it to a particular instance. Once the volume is attached to the EC2 instance, the Node Plugin is free to format it and create a mount point on its host’s filesystem.

Here is an annotated version of the above volume lifecycle diagram, this time with the AWS calls included in the flow chart.

   CreateVolume +------------+ DeleteVolume
 +------------->|  CREATED   +--------------+
 |              +---+----^---+              |
 |       Controller |    | Controller       v
+++         Publish |    | Unpublish       +++
|X|          Volume |    | Volume          | |
+-+                 |    |                 +-+
                    |    |
 ec2:CreateVolume   |    |   ec2:DeleteVolume
                    |    |
 ec2:AttachVolume   |    |   ec2:DetachVolume
                    |    |
                +---v----+---+
                | NODE_READY |
                +---+----^---+
               Node |    | Node
            Publish |    | Unpublish
             Volume |    | Volume
                +---v----+---+
                | PUBLISHED  |
                +------------+

If a Controller wants to delete a volume, it must first wait for the Node Plugin to safely unmount the volume to preserve data and system integrity. Otherwise, if a volume is forcibly detached from a node before unmounting, we could experience bad things like data corruption. Once the volume is safely unpublished (unmounted) by the Node Plugin, the Controller Plugin would then call ec2:DetachVolume to detach it from the node and finally ec2:DeleteVolume to delete it, assuming that you don’t want to reuse the volume elsewhere.

What makes the CSI so powerful is that it does not prescribe how to publish a volume. As long as your driver correctly implements the required API methods defined in the CSI spec, it will be compatible with the CSI and by extension, be usable in COs like Kubernetes and Nomad.

Running CSI Drivers in Kubernetes

What I haven’t entirely make clear yet is why the Controller and Node Plugins are plugins themselves! How does the Container Orchestrator call them, and where do they plug into?

Well, the answer depends on which Container Orchestrator you are using. Since I’m most familiar with Kubernetes, I’ll be using it to demonstrate how a CSI driver interacts with a CO.

Deployment Model

Since the Node Plugin, responsible for low-level volume operations, must be running on every node in your data plane, it is typically installed using a DaemonSet. If you have heterogeneous nodes and only want to deploy the plugin to a subset of them, you can use node selectors, affinities, or anti-affinities to control which nodes receive a Node Plugin Pod. Since the Node Plugin requires root access to modify host volumes and mounts, these Pods will be running in privileged mode. In this mode, the Node Plugin can escape its container’s security context to access the underlying node’s filesystem when performing mounting and provisioning operations. Without these elevated permissions, the Node Plugin could only operate inside of its own containerized namespace without the system-level access that it requires to provision volumes on the node.
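A heavily trimmed sketch of such a DaemonSet follows; the driver image and paths are placeholders, and real manifests add the node-driver-registrar sidecar (covered below) plus additional mounts:

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: csi-node-plugin
spec:
  selector:
    matchLabels:
      app: csi-node-plugin
  template:
    metadata:
      labels:
        app: csi-node-plugin
    spec:
      containers:
      - name: csi-driver
        image: example.com/csi-driver:latest    # hypothetical driver image
        securityContext:
          privileged: true                      # needed for mount operations on the host
        volumeMounts:
        - name: kubelet-dir
          mountPath: /var/lib/kubelet
          mountPropagation: Bidirectional       # make mounts visible to the host
      volumes:
      - name: kubelet-dir
        hostPath:
          path: /var/lib/kubelet                # kubelet's data directory on the node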

The Controller Plugin is usually run in a Deployment because it deals with higher-level primitives like volumes and snapshots, which don’t require filesystem access to every single node in the cluster. Again, let’s think about the AWS example I used earlier. If the Controller Plugin is just making AWS API calls to manage volumes and snapshots, why would it need access to a node’s root filesystem? Most Controller Plugins are stateless and highly available, both of which lend themselves to the Deployment model. The Controller also does not need to be run in a privileged context.

Event-Driven Sidecar Pattern

Now that we know how CSI plugins are deployed in a typical cluster, it’s time to focus on how Kubernetes calls each plugin to perform CSI-related operations. A series of sidecar containers, registered with the Kubernetes API server to react to different events across the cluster, is deployed alongside each Controller and Node Plugin. In a way, this is similar to the typical Kubernetes controller pattern, where controllers react to changes in cluster state and attempt to reconcile the current cluster state with the desired one.

There are currently 6 different sidecars that work alongside each CSI driver to perform specific volume-related operations. Each sidecar registers itself with the Kubernetes API server and watches for changes in a specific resource type. Once the sidecar has detected a change that it must act upon, it calls the relevant plugin with one or more API calls from the CSI specification to perform the desired operations.

Controller Plugin Sidecars

Here is a table of the sidecars that run alongside a Controller Plugin:

Sidecar Name          | K8s Resources Watched      | CSI API Endpoints Called
external-provisioner  | PersistentVolumeClaim      | CreateVolume, DeleteVolume
external-attacher     | VolumeAttachment           | Controller(Un)PublishVolume
external-snapshotter  | VolumeSnapshot (Content)   | CreateSnapshot, DeleteSnapshot
external-resizer      | PersistentVolumeClaim      | ControllerExpandVolume

How do these sidecars work together? Let’s use an example of a StatefulSet to demonstrate. In this example, we’re dynamically provisioning our PersistentVolumes (PVs) instead of mapping PersistentVolumeClaims (PVCs) to existing PVs. We start at the creation of a new StatefulSet with a VolumeClaimTemplate.

---
apiVersion: apps/v1
kind: StatefulSet
spec:
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi

Creating this StatefulSet will trigger the creation of a new PVC based on the above template. Once the PVC has been created, the Kubernetes API will notify the external-provisioner sidecar that this new resource was created. The external-provisioner will then send a CreateVolume message to its neighbor Controller Plugin over gRPC. From here, the CSI driver’s Controller Plugin takes over by processing the incoming gRPC message and will create a new volume based on its custom logic. In the AWS EBS driver, this would be an ec2:CreateVolume call.

At this point, the control flow moves to the built-in PersistentVolume controller, which will create a matching PV and bind it to the PVC. This allows the StatefulSet’s underlying Pod to be scheduled and assigned to a Node.

Here, the external-attacher sidecar takes over. It will be notified of the new PV and call the Controller Plugin’s ControllerPublishVolume endpoint, attaching the volume to the StatefulSet’s assigned node. This would be the equivalent of ec2:AttachVolume in AWS.
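If you’re curious, the VolumeAttachment objects that the external-attacher reacts to can be inspected with kubectl get volumeattachments. A sketch of such an object, with all values being illustrative:

---
apiVersion: storage.k8s.io/v1
kind: VolumeAttachment
metadata:
  name: csi-0a1b2c3d                      # generated name
spec:
  attacher: ebs.csi.aws.com               # the CSI driver responsible for the attach
  nodeName: ip-10-0-1-23.ec2.internal     # the node the volume should attach to (example)
  source:
    persistentVolumeName: pvc-0a1b2c3d    # the PV being attached (example)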

At this point, we have an EBS volume that is attached to an EC2 instance, all based on the creation of a StatefulSet, a PersistentVolumeClaim, and the work of the AWS EBS CSI Controller Plugin.

Node Plugin Sidecars

There is only one unique sidecar that is deployed alongside the Node Plugin: the node-driver-registrar. This sidecar, running as part of a DaemonSet, registers the Node Plugin with a Node’s kubelet. During the registration process, the Node Plugin informs the kubelet that it is able to mount volumes using the CSI driver that it is part of. The kubelet itself will then wait until a Pod is scheduled to its corresponding Node, at which point it is responsible for making the relevant CSI calls (NodePublishVolume) to the Node Plugin over gRPC.

Common Sidecars

There is also a livenessprobe sidecar that runs in both the Controller and Node Plugin Pods. It monitors the health of the CSI driver and reports back to the Kubernetes Liveness Probe mechanism.

Communication over Sockets

How do these sidecars communicate with the Controller and Node Plugins? Over gRPC through a shared socket! Each sidecar and plugin container mounts a volume pointing to a single unix socket.
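A sketch of how this looks for the Controller Plugin; the driver image and its --endpoint flag are hypothetical, while the csi-provisioner sidecar and its --csi-address flag come from the real sig-storage images. Both containers mount the same emptyDir-backed directory that holds the socket.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: csi-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: csi-controller
  template:
    metadata:
      labels:
        app: csi-controller
    spec:
      containers:
      - name: csi-driver
        image: example.com/csi-driver:latest          # hypothetical driver image
        args: [ "--endpoint=unix:///csi/csi.sock" ]   # listen on the shared socket
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
      - name: external-provisioner
        image: registry.k8s.io/sig-storage/csi-provisioner:v4.0.0
        args: [ "--csi-address=/csi/csi.sock" ]       # send gRPC to the driver's socket
        volumeMounts:
        - name: socket-dir
          mountPath: /csi
      volumes:
      - name: socket-dir
        emptyDir: {}                                  # pod-local directory holding the unix socket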

CSI Controller Deployment

This setup highlights the pluggable nature of CSI Drivers. To replace one driver with another, you simply swap the CSI Driver container with another and ensure that it’s listening on the unix socket that the sidecars are sending gRPC messages to. Because all drivers advertise their own capabilities and communicate over the shared CSI API contract, it’s literally a plug-and-play solution.

Conclusion

In this article, I only covered the high-level concepts of the Container Storage Interface spec and its implementation in Kubernetes. While I hope it has provided a clearer understanding of what happens once you install a CSI driver, writing one requires significant low-level knowledge of both your nodes’ operating system(s) and the underlying storage mechanism that your driver is implementing. Luckily, CSI drivers exist for a variety of cloud providers and distributed storage solutions, so it’s likely that you can find a CSI driver that already fulfills your requirements. But it always helps to know what’s happening under the hood in case your particular driver is misbehaving.

If this article interests you and you want to learn more about the topic, please let me know! I’m always happy to answer questions about CSI Drivers, Kubernetes Operators, and a myriad of other DevOps-related topics.

The post How the CSI (Container Storage Interface) Works appeared first on simplyblock.

]]>
Kubernetes CSI: Container Attached Storage https://www.simplyblock.io/blog/kubernetes-csi-container-attached-storage-and-container-storage-interface/ Wed, 27 Mar 2024 12:13:27 +0000 https://www.simplyblock.io/?p=306 Containerized services must be stateless, a doctrine that was widely used in the early days of containerization, which came hand-in-hand with microservices. While it makes elasticity easy, these days, we containerize many types of services, such as databases, which cannot be stateless—at least, without losing their meaning. This is where the Kubernetes container storage interface […]

The post Kubernetes CSI: Container Attached Storage appeared first on simplyblock.

]]>
Containerized services must be stateless, a doctrine that was widely used in the early days of containerization, which came hand-in-hand with microservices. While it makes elasticity easy, these days, we containerize many types of services, such as databases, which cannot be stateless—at least, without losing their meaning. This is where the Kubernetes container storage interface (CSI) comes in.

Docker, initially released in 2013, brought containerized applications to the vast majority of users (outside of the Solaris and BSD world), making them a commodity for the masses. Kubernetes, however, eased the process of orchestrating complex container-based systems. Both systems offer data storage options, either ephemeral (temporary) or persistent. Let’s dive into the concepts of container-attached storage and the Kubernetes CSI.

What is Container Attached Storage (CAS)?

When containerized services need disk storage, whether ephemeral or persistent, container-attached storage (or CAS) provides the requested “virtual disk” to the container.

High-level architecture diagram for container-attached storage (CAS) with Kubernetes

The CAS resources are managed alongside other container resources and are directly attached to the container’s own lifecycle. That means storage resources are automatically provisioned and potentially de-provisioned. To achieve this functionality, the management of container-attached storage resources isn’t provided by the host operating system but is directly integrated into the container runtime environment, i.e., systems such as Kubernetes, Docker, and others.

Since the storage resource is attached to the container, it isn’t used by the host operating system or other containers. Separating storage and compute resources in this way provides one of the building blocks of loosely coupled services, which small, independent development teams can easily manage.

From my perspective, five main principles are important to CAS:

  1. Native: Storage resources are a first-class citizen of containerized environments. Therefore, the overall container runtime environment seamlessly integrates with and fully manages it.
  2. Dynamic: Storage resources are (normally) coupled to their container’s lifecycle. This allows for on-demand provisioning of storage volumes whose size and performance profile are tailored to the applications’ needs. The dynamic nature and automatic resource management prevent manual intervention of volumes and devices.
  3. Decoupled: Storage resources are decoupled from the underlying infrastructure, meaning the container doesn’t know (or care) where the provided storage comes from. That makes it easy to provide different storage options, like high-performance or highly resilient storage, to different containers. For super-high-performance but ephemeral storage, even RAM disks would be an option.
  4. Efficient: By eliminating the need for traditional storage, e.g., local storage, it is easy to optimize resource utilization using special storage clusters, thin provisioning, and over-commitment. It also makes it easy to provide multi-regional backups and enables immediate re-attachment in case the container needs to be rescheduled on another cluster node.
  5. Agnostic: The storage provider can be easily exchanged due to the decoupling of storage resources and container runtime. This prevents vendor lock-in or provides the option to utilize multiple different storage options, depending on the needs of specific applications. A database running in a container will have very different storage requirements from a normal REST API service.

Given the five principles above, we can provide each and every container with exactly the storage it needs. Some containers only need ephemeral storage, which is discarded when the container stops. Others need persistent storage, which either lives until the container is deleted or, in specific cases, even survives deletion to be reattached to a new container (for example, during container migration).
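As a small sketch of both ends of that spectrum, a single pod can combine an ephemeral emptyDir volume (discarded with the pod) and a persistent volume claim (surviving it); all names are placeholders:

---
apiVersion: v1
kind: Pod
metadata:
  name: mixed-storage
spec:
  containers:
  - name: app
    image: nginx:stable              # example image
    volumeMounts:
    - name: scratch
      mountPath: /tmp/cache          # ephemeral local cache
    - name: state
      mountPath: /var/lib/state      # persistent application state
  volumes:
  - name: scratch
    emptyDir: {}                     # lives and dies with the pod
  - name: state
    persistentVolumeClaim:
      claimName: state-claim         # container-attached persistent storage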

What is a Container Storage Interface (Kubernetes CSI)?

Like everything in Kubernetes, the container-attached storage functionality is provided by a set of microservices orchestrated by Kubernetes itself, making it modular by design. These internally provided services, together with vendor-supplied extensions, make up the container storage interface (or Kubernetes CSI). Together, they create a well-defined interface through which any type of storage option can be plugged into Kubernetes.

The container storage interface defines a standard set of functionalities, some mandatory and some optional, to be implemented by the Kubernetes CSI drivers. Those drivers are commonly provided by the different vendors of storage systems.

Hence, the CSI drivers build bridges between Kubernetes and the actual storage implementation, which can be physical, software-defined, or fully virtual (like an implementation sending all data “stored” to /dev/null). On the other hand, the interface allows vendors to implement their storage solution as efficiently as possible, requiring only a minimal set of operations for provisioning and general management. Vendors can also choose how to deploy storage, with the two main categories being hyperconverged (compute and storage share the same cluster nodes) and disaggregated (the storage environment is fully separated from the Kubernetes workloads using it, bringing a clear separation of storage and compute resources).

Internal architecture of the container storage interface (CSI) from the storage backend over the CSI driver to the client pod utilizing the persistent volume.

Just like Kubernetes, the container storage interface is developed as a collaborative effort inside the Cloud Native Computing Foundation (better known as CNCF) by members from all sides of the industry, vendors, and users.

The main goal of Kubernetes CSI is to deliver on the premise of being fully vendor-neutral. In addition, it enables parallel deployment of multiple different drivers, offering storage classes for each of them. This provides us, as users, with the ability to choose the best storage technology for each container, even in the same Kubernetes cluster.

As mentioned, the Kubernetes CSI driver interface provides a standard set of storage (or volume) operations. These include creation or provisioning, resizing, snapshotting, cloning, and volume deletion. The operations can be performed either directly or through Kubernetes custom resource definitions (CRDs), integrating into the consistent approach to managing container resources.
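Snapshotting, for example, is expressed through the VolumeSnapshot custom resource; a minimal sketch with placeholder names (the snapshot class maps to a CSI driver, much like a storage class does for volumes):

---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
spec:
  volumeSnapshotClassName: my-snapshot-class   # placeholder, driver-specific
  source:
    persistentVolumeClaimName: data-claim      # the PVC to snapshot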

Editor’s note: We also have a deep dive into how the Kubernetes Container Storage Interface works.

Kubernetes and Stateful Workloads

For many people, containerized workloads should be fully stateless; in the past, this was the most commonly repeated mantra. With the rise of orchestration platforms such as Kubernetes, deploying stateful workloads became more common, often due to the simplified deployment. Orchestrators offer features like automatic elasticity, restarting containers after crashes, automatic migration of containers for rolling upgrades, and many more typical operational procedures. Having these built into an orchestration platform removes a lot of the operational burden, so people started to deploy more and more databases.

Databases aren’t the only stateful workloads, though. Other applications and services may also require storage of some kind of state, sometimes as a local cache, using ephemeral storage, and sometimes in a more persistent fashion, as databases.

Benjamin Wootton (then working for Contino, now at Ensemble) wrote a great blog post about the difference between stateless and stateful containers and why the latter is needed. You should read it, but only after this one.

Your Kubernetes Storage with Simplyblock

The container storage interface in Kubernetes serves as the bridge between Kubernetes and external storage systems. It provides a standardized and modular approach to provisioning and managing container-attached storage resources.

By decoupling storage functionality from the Kubernetes core, Kubernetes CSI promotes interoperability, flexibility, and extensibility. This enables organizations to seamlessly leverage a wide range of storage solutions in their Kubernetes environments, tailoring the storage to the needs of each container individually.

With the evolving ecosystem and changing Kubernetes workloads towards databases and other IO-intensive or low-latency applications, storage becomes increasingly important. Simplyblock is your distributed, disaggregated, high-performance, predictable, low-latency, and resilient storage solution. Simplyblock is tightly integrated with Kubernetes through the CSI driver and available as a StorageClass. It enables storage virtualization with overcommitment, thin provisioning, NVMe over TCP access, copy-on-write snapshots, and many more features.

If you want to learn more about simplyblock, read “Why simplyblock?” If you want to get started, we believe in simple pricing.

The post Kubernetes CSI: Container Attached Storage appeared first on simplyblock.

]]>