AWS Storage Optimization: Avoid EBS Over-provisioning

“Cloud is expensive” is an often repeated phrase among IT professionals. What makes the cloud so expensive, though? One element that significantly drives cloud costs is storage over-provisioning and lack of storage optimization. Over-provisioning refers to the eager allocation of more resources than required by a specific workload at the time of allocation.

When we hear about hoarding goods, we often think of so-called preppers preparing for some type of serious event. Many people would laugh about that kind of behavior. However, it is commonplace when we are talking about cloud environments.

In the past, most workloads used their own servers, often barely utilizing any of the machines. That’s why we invented virtualization techniques, first with virtual machines and later with containers. We didn’t like the idea of wasting resources and money.

That didn’t stop when workloads were moved to the cloud, or did it?

What is Over-Provisioning?

As briefly mentioned above, over-provisioning refers to allocating more resources than are needed for a given workload or application. That means we actively request more resources than we need, and we know it. Over-provisioning typically occurs across various infrastructure components: CPU, memory, and storage. Let’s look at some basic examples to understand what that means:

  1. CPU Over-Provisioning: Imagine running a web server on a virtual machine instance (e.g., Amazon EC2) with 16 vCPUs. At the same time, your application only requires four vCPUs for the current load and number of customers. You expect to increase the number of customers in the next year or so. Until then, the excess computing power sits idle, wasting resources and money.
  2. Memory Over-Provisioning: Consider a database server provisioned with 64GB of RAM when the database service commonly only uses 16GB, except during peak loads. The unused memory is essentially paid for but unutilized most of the time.
  3. Storage Over-Provisioning: Consider a Kubernetes cluster with ten instances of the same stateful service (like a database), each requesting a 100 GB block storage volume (e.g., Amazon EBS) that will only slowly fill up over the course of a year. If each container currently uses about 20 GB, we have over-provisioned 800 GB and have to pay for it (see the short cost sketch after this list).
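
A quick back-of-the-envelope calculation makes the impact of the storage example above tangible. The sketch below is purely illustrative; the price per GB-month is an assumed example value (roughly gp3-like) and varies by volume type and region.

```python
# Rough cost estimate for the storage over-provisioning example above.
# GB_PRICE_PER_MONTH is an assumed example value; check current AWS pricing
# for your volume type and region.

GB_PRICE_PER_MONTH = 0.08   # assumed USD per provisioned GB-month
REPLICAS = 10               # stateful service instances
PROVISIONED_GB = 100        # requested per volume
USED_GB = 20                # actually used per volume today

provisioned_total = REPLICAS * PROVISIONED_GB   # 1,000 GB provisioned
used_total = REPLICAS * USED_GB                 # 200 GB actually used
wasted_gb = provisioned_total - used_total      # 800 GB paid for but unused

print(f"Provisioned: {provisioned_total} GB, used: {used_total} GB")
print(f"Over-provisioned: {wasted_gb} GB, "
      f"about ${wasted_gb * GB_PRICE_PER_MONTH:.2f} wasted per month")
```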

Why is EBS Over-Provisioning an Issue?

EBS over-provisioning isn’t an issue by itself; we lived (almost) happily with it for decades. While over-provisioning seems like the safe bet to ensure performance and predictability, it comes with a set of drawbacks.

  1. High initial cost: When you overprovision, you pay for resources you don’t use from day one. This can significantly inflate your cloud bill, especially at scale.
  2. Resource waste: Unused resources aren’t just a financial burden. They also waste valuable computing power that could be better allocated elsewhere. Not to mention the environmental effects of over-provisioning, think CO2 footprint.
  3. Hard to estimate upfront: Predicting exact resource needs is challenging, especially for new applications or those with variable workloads. This uncertainty often leads us to very conservative (and excessive) provisioning decisions.
  4. Limitations when resizing: While cloud providers like AWS allow resource resizing, limitations exist. Amazon EBS volumes can only be modified once every 6 hours, making it difficult to adjust quickly to changing needs (see the sketch after this list).
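
To make the resizing limitation concrete, here is a hedged boto3 sketch that checks the modification state of a volume and requests a resize. The volume ID, region, and target size are placeholders, and error handling is kept minimal; AWS typically allows another modification only about six hours after the previous one.

```python
# Minimal sketch: resize an EBS volume with boto3 and inspect its modification
# state. Volume ID, region, and target size are placeholders.
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="eu-central-1")   # example region
VOLUME_ID = "vol-0123456789abcdef0"                      # placeholder

# Check whether an earlier modification is still in progress or cooling down.
try:
    mods = ec2.describe_volumes_modifications(VolumeIds=[VOLUME_ID])
    for mod in mods.get("VolumesModifications", []):
        print(mod["ModificationState"], mod.get("StartTime"))
except ClientError:
    pass  # e.g. the volume has never been modified

# Request the resize; this fails if the volume was modified too recently.
ec2.modify_volume(VolumeId=VOLUME_ID, Size=200)  # grow to 200 GiB
```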

On top of those issues, which are all financial impact related, over-provisioning can also directly or indirectly contribute to topics such as:

  • Reduced budget for innovation
  • Complex and hard-to-manage infrastructures
  • Potential compliance issues in regulated industries
  • Decreased infrastructure efficiency

The Solution is Pay-By-Use

Pay-by-use refers to the concept that customers are billed only for what they actually use. Using our earlier example of a 100 GB Amazon EBS volume where only 20 GB is used, we would only be charged for those 20 GB. As a customer, I’d love the pay-by-use option since it relieves me of the burden of the initial estimate.

So why isn’t everyone just offering pay-by-use models?

The Complexity of Pay-By-Use

Many organizations dream of an actual pay-by-use model, where they only pay for the exact resources consumed. This improves the financial impact, optimizes the overall resource utilization, and brings environmental benefits. However, implementing this is challenging for several reasons:

  1. Technical Complexity: Building a system that can accurately measure and bill for precise resource usage in real time is technically complex.
  2. Performance Concerns: Constant scaling and de-scaling to match exact usage can potentially impact performance and introduce latency.
  3. Unpredictable Costs: While pay-by-use can save money, it can also make costs less predictable, making budgeting challenging.
  4. Legacy Systems: Many existing applications aren’t designed to work with dynamically allocated resources.
  5. Cloud Provider Greed: While this is probably exaggerated, there is still some truth to it. Cloud providers overcommit CPU, RAM, and network bandwidth, which is why they offer both machine types with dedicated resources and ones without (where they tend to overcommit resources, and you might encounter the “noisy neighbor” problem). On the storage side, they thinly provision your storage out of a large, ever-growing storage pool.

Over-Provisioning in AWS

Like most cloud providers, AWS has several components where over-provisioning is typical. The most obvious one is resources around Amazon EC2. However, since many other services are built upon EC2 machines (like Kubernetes clusters), this is the most common entry point to look into optimization.

Amazon EC2 (CPU and Memory)

When looking at Amazon EC2 instances to save some hard-earned money, AWS offers some tools by itself:

  • Use AWS CloudWatch to monitor CPU and memory utilization (a minimal query sketch follows this list).
  • Implement auto-scaling groups to adjust instance counts dynamically based on demand.
  • Consider using EC2 Auto Scaling with predictive scaling to anticipate future needs.
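
As a starting point for the CloudWatch route mentioned in the list above, the hedged boto3 sketch below pulls the average CPU utilization of each running EC2 instance over the last 14 days and flags likely over-provisioning candidates. The look-back window and the 10% threshold are arbitrary example values, not official recommendations.

```python
# Minimal sketch: flag running EC2 instances with consistently low CPU usage.
# The 14-day window and 10% threshold are arbitrary example values.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=14)

paginator = ec2.get_paginator("describe_instances")
pages = paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            instance_id = instance["InstanceId"]
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
                StartTime=start,
                EndTime=end,
                Period=86400,            # one datapoint per day
                Statistics=["Average"],
            )
            datapoints = stats["Datapoints"]
            if not datapoints:
                continue
            avg = sum(p["Average"] for p in datapoints) / len(datapoints)
            if avg < 10:  # example threshold for "probably over-provisioned"
                print(f"{instance_id}: average CPU {avg:.1f}% over 14 days")
```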

In addition, external tools such as AutoSpotting or Cast.ai enable you to find over-provisioned VMs and automatically adjust them or replace them with so-called spot instances. Spot instances are VM instances that are way cheaper but can be taken away from you with only a few seconds’ notice. The idea is that AWS offers these instances at a reduced rate when they can’t be sold for their regular price. That said, if the capacity is required, they’ll take them away from you—still a great way to save some money.

Last but not least, companies like DoIT work as resellers for hyperscalers like AWS. They have custom rates and offer additional features like bursting beyond your typical requirements. This is a great way to get cheaper VMs and extra services. It’s worth a look.

Amazon EBS Storage Over-Provisioning

One of the most common causes of over-provisioning happens with block storage volumes, such as Amazon EBS. With EBS, the over-provisioning is normally driven by:

  • Pre-allocated Capacity: EBS volumes are provisioned with a fixed size, and you pay for the entire allocated space regardless of usage.
  • Modification Limitations: EBS volumes can only be modified every 6 hours, making rapid adjustments difficult.
  • Performance Considerations: A common belief is that larger volumes perform better, so people feel incentivized to over-provision.

One interesting note, though, is that while customers have to pay for the total allocated size, AWS likely uses technologies such as thin provisioning internally, allowing it to oversell its actual physical storage. Imagine this overselling margin were on your end and not the hyperscaler’s.

How Simplyblock Can Help with EBS Storage Over-Provisioning

Simplyblock offers an innovative storage optimization platform to address storage over-provisioning challenges. By providing you with a comprehensive set of technologies, simplyblock enables several features that significantly optimize storage usage and costs.

Thin Provisioning

Thin provisioning is a technique where a storage entity of any capacity is created without pre-allocating the requested capacity. A thinly provisioned volume only requires as much physical storage as the data consumes at any point in time. This enables overcommitting the underlying storage: imagine ten volumes with a provisioned capacity of 1 TB each, but with only 100 GB actually used per volume. The physical storage required at this time is only around 1 TB in total, meaning roughly 9 TB of provisioned capacity doesn’t have to be paid for unless it is actually used.

Simplyblock’s thin provisioning technology allows you to create logical volumes of any size without pre-allocating the total capacity. You only consume (and pay for) the actual space your data uses. This eliminates the need to over-provision “just in case” and allows for more efficient use of your storage resources. When your actual storage requirements increase, simplyblock automatically allocates additional underlying storage to keep up with your demands.

Two thinly provisioned devices and the underlying physical storage
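
The toy model below (plain Python, not simplyblock’s actual implementation) illustrates the core idea: logical capacity is promised up front, but physical blocks are only allocated once data is written to them.

```python
# Toy model of thin provisioning: logical capacity is promised up front,
# physical blocks are only allocated on write. Illustration only.

BLOCK_SIZE = 4096  # bytes

class ThinVolume:
    def __init__(self, logical_size_gb: int):
        self.logical_size = logical_size_gb * 1024**3
        self.allocated = set()  # indices of physically allocated blocks

    def write(self, offset: int, length: int) -> None:
        if offset + length > self.logical_size:
            raise ValueError("write beyond provisioned capacity")
        # Only the blocks actually touched by the write consume physical space.
        first = offset // BLOCK_SIZE
        last = (offset + length - 1) // BLOCK_SIZE
        self.allocated.update(range(first, last + 1))

    @property
    def physical_bytes(self) -> int:
        return len(self.allocated) * BLOCK_SIZE

vol = ThinVolume(logical_size_gb=1024)     # a "1 TB" volume is promised...
vol.write(offset=0, length=1 * 1024**3)    # ...but only 1 GiB is written
print(vol.logical_size, vol.physical_bytes)  # huge logical size, small physical footprint
```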

Copy-on-Write, Snapshots, and Instant Clones

Simplyblock’s storage technology is a fully copy-on-write-enabled system. Copy-on-write is a technique also known as shadowing. Instead of copying data right away when multiple copies are created, copy-on-write will only create a second instance when the data is actually changed. This means the old version is still around since other copies still refer to it, while only one specific copy refers to the changed data. Copy-on-write enables the instant creation of volume snapshots and clones without duplicating data. This is particularly useful for development and testing environments, where multiple copies of large datasets are often needed. Instead of provisioning full copies of production data, you can create instant, space-efficient clones specifically attractive for databases, AI / ML workloads, or analytics data.

Copy-on-write technique explained with two files referring to shared, similar parts and modified, unshared parts
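
As a rough, hypothetical illustration (again plain Python rather than the actual storage engine), a copy-on-write clone initially shares all blocks with its source and only materializes private copies of blocks that are modified:

```python
# Toy copy-on-write clone: blocks are shared with the parent until written,
# at which point only the written block gets a private copy. Illustration only.

class CowVolume:
    def __init__(self, blocks=None, parent=None):
        self.own_blocks = dict(blocks or {})  # block index -> data
        self.parent = parent

    def clone(self) -> "CowVolume":
        # Instant and space-efficient: no data is copied here.
        return CowVolume(parent=self)

    def read(self, index: int):
        if index in self.own_blocks:
            return self.own_blocks[index]
        return self.parent.read(index) if self.parent else None

    def write(self, index: int, data: bytes) -> None:
        # Copy-on-write: only now does this block diverge from the parent.
        self.own_blocks[index] = data

base = CowVolume(blocks={0: b"shared", 1: b"shared"})
snap = base.clone()            # instant snapshot/clone, nothing copied
snap.write(1, b"changed")      # block 1 diverges, block 0 remains shared
print(base.read(1), snap.read(1), snap.read(0))
```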

Transparent Tiering

With most data sets, parts of the data are typically assumed to be “cold,” meaning that the data is very infrequently used, if ever. This is true for any data that needs to be kept available for regulatory reasons or historical manufacturing data (such as process information for car part manufacturing). This data can be moved to slower but much less expensive storage options. Simplyblock automatically moves infrequently accessed data to cheaper storage tiers such as object storage (e.g., Amazon S3 or MinIO) and non-NVMe SSD or HDD pools while keeping hot data on high-performance storage. This tiering is completely transparent to your applications, database, or other workload and helps optimize costs without sacrificing performance. With tiering integrated into the storage layer, application and system developers can focus on business logic rather than storage requirements.

Automatic tiering, transparently moving cold data parts to slower but cheaper storage
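
A simplistic sketch of such a tiering policy is shown below: keys that have not been accessed within a configurable window are demoted to a cheaper tier and transparently promoted back on access. The threshold and the in-memory "tiers" are purely illustrative and not simplyblock’s actual policy or implementation.

```python
# Toy tiering policy: data not accessed for `cold_after` seconds is demoted to
# a cheaper tier; reads transparently promote it back. Illustration only.
import time

class TieredStore:
    def __init__(self, cold_after: float = 30 * 24 * 3600):  # e.g. 30 days
        self.hot = {}    # key -> (data, last_access); stands in for fast NVMe
        self.cold = {}   # key -> data; stands in for object storage / HDD
        self.cold_after = cold_after

    def put(self, key: str, data: bytes) -> None:
        self.hot[key] = (data, time.time())

    def get(self, key: str) -> bytes:
        if key in self.hot:
            data, _ = self.hot[key]
        else:
            # Transparent to the caller: promote cold data back to the hot tier.
            data = self.cold.pop(key)
        self.hot[key] = (data, time.time())
        return data

    def run_tiering(self) -> None:
        now = time.time()
        cold_keys = [k for k, (_, ts) in self.hot.items()
                     if now - ts > self.cold_after]
        for key in cold_keys:
            data, _ = self.hot.pop(key)
            self.cold[key] = data  # demote infrequently accessed data
```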

Storage Pooling

Storage pooling is a technique in which multiple storage devices or services are used in conjunction. It enables technologies like thin provisioning and data tiering, which were already mentioned above.

By pooling multiple cloud block storage volumes (e.g., Amazon EBS volumes), simplyblock can provide better performance and more flexible scaling. This pooling allows for more granular storage growth, avoiding the need to provision large EBS volumes upfront.

Additionally, simplyblock can leverage directly attached fast SSD storage (NVMe), also called local instance storage, and make it part of the storage pool or use it as an even faster workload-local data cache.

NVMe over Fabrics

NVMe over Fabrics is an industry standard for remotely attaching block devices to clients. It can be seen as the successor to iSCSI and enables the full feature set and performance of NVMe-based SSD storage. Simplyblock uses NVMe over Fabrics (specifically the NVMe/TCP variant) to provide high-performance, low-latency access to storage.

This enables the consolidation of multiple storage locations into a centralized one, enabling even greater savings on storage capacity and compute power.

Pay-By-Use Model Enablement

As stated above, pay-by-use models are a real business advantage, specifically for storage. Implementing a pay-by-use model in the cloud requires taking charge of how storage works. This is complex and requires a lot of engineering effort. This is where simplyblock helps bring a competitive advantage to your doorstep.

With its underlying technology and features such as thin provisioning, simplyblock makes it easier for managed service providers to implement a true pay-by-use model for their customers, giving you the competitive advantage at no extra cost or development effort, all fully transparent to your database or application workload.

AWS Storage Optimization with Simplyblock

By addressing the core issues of EBS over-provisioning, simplyblock helps reduce costs and improves overall storage efficiency and flexibility. For businesses struggling with storage over-provisioning in AWS, simplyblock offers a compelling solution to optimize their infrastructure and better align costs with actual usage.

In conclusion, while over-provisioning remains a significant challenge in AWS environments, particularly with storage, simplyblock paves the way for more efficient, cost-effective cloud storage optimization management. By combining advanced technologies with a deep understanding of cloud storage dynamics, simplyblock enables businesses to achieve the elusive goal of paying only for what they use without sacrificing performance or flexibility.

Take your competitive advantage and get started with simplyblock today.

Easy Developer Namespaces with Multi-tenant Kubernetes with Alessandro Vozza from Kubespaces

This interview is part of simplyblock’s Cloud Commute Podcast, available on YouTube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site.

In this installment of the podcast, we’re joined by Alessandro Vozza (Twitter/X, GitHub), a prominent figure in the Kubernetes and cloud-native community, who talks about his new project, Kubespaces, which aims to simplify Kubernetes deployment by offering a namespace-as-a-service. He highlights the importance of maintaining the full feature set of Kubernetes while ensuring security and isolation for multi-tenant environments. Alessandro’s vision includes leveraging the Kubernetes API to create a seamless, cloud-agnostic deployment experience, ultimately aiming to fulfill the promises of platform engineering and serverless computing. He also discusses future trends in Kubernetes and the significance of environmental sustainability in technology.

EP16: Easy Developer Namespaces with Multi-tenant Kubernetes with Alessandro Vozza from Kubespaces

Chris Engelbert: Hello, everyone. Welcome back to the next episode of simplyblock’s Cloud Commute podcast. Today, I have another incredible guest. I know I say that every time, but he’s really incredible. He’s been around in the Kubernetes space for quite a while. And I think, Alessandro, the best way is just to introduce yourself. Who are you? What have you done in the past, and what are you doing right now?

Alessandro Vozza: Thank you for having me. Well, I’m Alessandro, yes, indeed. I’ve been around for some time in the cloud-native community. I’m Italian, from the south of Italy, and I moved to Amsterdam, where I live currently, about 20 years ago, to get my PhD in chemistry. And then after I finished my PhD, that’s my career. So I went through different phases, always around open source, of course. I’ve been an advocate for open source, and a user of open source since the beginning, since I could lay my hands on a keyboard.

That led me to various places, of course, and various projects. So I started running the DevOps meetup in Amsterdam back in the day, 10, 11 years ago. Then from there, I moved to the OpenStack project and running the OpenStack community. But when I discovered Kubernetes, and what would become the Cloud Native Computing Foundation, I started running the local meetup. And that was kind of a turning point for me. I really embraced the community and embraced the project and started working on the things. So basically what I do is organize the meetup and organize the KCDs, the Kubernetes Community Days in Amsterdam, in Utrecht, around the country. That kind of led me through a natural process to be a CNCF Ambassador, which are people that represent or are so enthusiastic about the way the Cloud Native Computing Foundation works and the community, that are naturally elected to be the face or the ambassadors for the project, for the mission.

At this moment, I still do that. It’s my honor and pleasure to serve the community, to create, to run monthly meetups and KCDs and help other communities thrive as well. So the lessons learned in the Netherlands, in the meetups and in the conferences, we try to spread them as much as possible. We are always available for other communities to help them thrive as well. So that’s been me in a nutshell. So all about community. I always say I’m an average programmer, I’m an average engineer, but where I really shine is to organize these events and to get the people together. I get a kick out of a successful event where people form connections and grow together. So that’s what drives me in my very core.

Chris Engelbert: I like how you put this. You really shine in bringing engagement to the community, helping people to shine themselves, to grow themselves. I think that is a big part of being a developer advocate or in the developer relations space in general. You love this sharing of information, helping other people to get the most out of it.

Alessandro Vozza: Actually, I used to be, or I still do play the bass, electric bass and double bass. And the bass player stays in the back next to the drummer and he creates the conditions so the other members of the band shine. So the guitar player usually stays in front, the bass player is the guy that stays back and is happy to create the foundations and cover the music to really shine. And that’s maybe my nature. So maybe it reflects from the fact that I always love playing the bass and being that guy in a band.

Chris Engelbert: I love that. That’s a great analogy. I never thought about that, but that is just brilliant. And I actually did the same thing in the past, so there may be some truth to that. So we met a few weeks ago in Amsterdam, actually at AWS Summit Amsterdam.

And I invited you because I thought you were still with the previous company, but you’re doing something new right now. So before that, you were with Solo.io, an API gateway, networking, whatever kind of thing. But you’re doing your own thing. So tell us about it.

Alessandro Vozza: Yeah. So it was a great year doing DevRel and so much fun going and speaking about service mesh, which is something that I really believe it’s going to, it’s something that everybody needs, but I know it’s a controversial, but it’s something that I really, you got to believe in it. You know, when you are a developer advocate, when you represent a company or community, the passion is important. You cannot have passion for something you don’t believe in, for something that you don’t completely embrace. And that was great. And we had so much fun for about a year or a bit more. But then I decided that I’m too young to settle, as always, like I’m only 48, come on, I have a good 10 years of engineering work to do. So I decided that I wanted to work on something else, on something mine, more, more mine, more an idea that I had, and I want to see it develop.

Filling a gap in the market and a real need for developers to have a flexible environment, environments to deploy their applications. So fulfilling the promises of platform engineering as a self-service platform to deploy applications. So the idea goes around the namespace. What is a namespace? Of course, it’s what the unit of deployment in Kubernetes really, it’s this magical place where developers can be free and can deploy their application without the control within the guard rails of whatever the system means, the cluster administrator sets.

But developers really love freedom. So developers don’t want to have to interact even with the sysops or sysadmins. In fact, developers love Heroku. So Heroku, I think, is the hallmark of developer experience where you just can deploy whatever you want, all your code, all your applications in a place and it’s automatically exposed and you can manage by yourself everything about your application.

I want to reproduce that. I want to get inspired by that particular developer experience. But because I love Kubernetes, of course, and because I really believe that the Kubernetes APIs are the cornerstone, the golden standards of cloud-native application deployment. So I want to offer the same experience but through the Kubernetes API. So how you do that, and that’s, of course, like this evolving product, me and a bunch of people are still working on, define exactly what does it mean and how it’s going to work. But the idea is that we offer namespace-as-a-service. What really matters to developers is not the clusters, is not the VMs or the networks or all the necessary evil that you need to run namespaces. But what really matters is the namespace, is a place where they can deploy their application. So what if we could offer the best of both worlds, kind of like the promises of serverless computing, right? So you are unburdened by infrastructure. Of course, there is infrastructure somewhere, the cloud is just somebody else’s computer, right? So it’s not magic, but it feels like magic because of the clever arrangement of servers in a way that you don’t see them, but they are still there.

So imagine a clusterless Kubernetes. The experience of Kubernetes, the API really, so all the APIs that you learn to love and embrace without the burden of infrastructure. That’s the core idea.

Chris Engelbert: So that means it’s slightly different from those app platforms like Fargate or what’s the Azure and GCP ones, Cloud Run and whatever. So it’s slightly different, right? Because you’re still having everything Kubernetes offers you. You still have your CRDs or your resource definitions, but you don’t have to manage Kubernetes on its own because it’s basically a hosted platform. Is that correct?

Alessandro Vozza: Yeah. So those platforms, of course, they are meant to run single individual application pods, but they don’t feel like Kubernetes. I don’t understand. For me, because I love it so much, I think developers love to learn also new things. So developers will love to have a Kubernetes cluster where they can do what they like, but without the burden of managing it. But this CloudRun and ACI and Fargate, they are great tools, of course, and you can use them to put together some infrastructure, but they’re still limiting in what you can deploy. So you can deploy this single container, but it’s not a full-fledged Kubernetes cluster. And I think it’s still crippling in a way that you don’t have the full API at your disposal, but you have to go through this extra API layer. It’s a bespoke API, so you got to learn Cloud Run, you got to learn ACI, you got to learn Fargate, but they are not compatible with each other. They are very cloud specific, but a Kubernetes API is cloud agnostic, and that’s what I want to build.

What we seek to build is to have a single place where you can deploy in every cloud, in every region, in some multi-region, multi-cloud, but through the same API layer, which is the pure and simple Kubernetes API.

Chris Engelbert: I can see there’s two groups of people, the ones that say, just hide all the complexity from Kubernetes. And you’re kind of on the other side, I wouldn’t say going all the way, like you want the complexity, but you want the feature set, the possibilities that Kubernetes still offers you without the complexity of operating it. That’s my feeling.

Alessandro Vozza: Yeah, the complexity lies in the operation, in the upgrades, the security, to properly secure a Kubernetes cluster, it takes a PhD almost, so there’s a whole sort of ecosystem dedicated to secure a cluster. But in Kubespaces, we can take care of it, we can make sure that the clusters are secure and compliant, while still offering the freedom to the developers to deploy what they need and they like. I think we underestimate the developers, so they love to tinker with the platform, so they love freedom, they don’t want the burden, even to interact with the operation team.

And so the very proposal here is that you don’t need an operation team, you don’t need a platform engineering team, it’s all part of the platform that we offer. And you don’t even need an account in Azure or AWS, you can select which cloud and which region to deploy to completely seamlessly and without limits.

Chris Engelbert: Okay, so that means you can select, okay, I need a Kubernetes cluster namespace, whatever you want to call it, in Azure, in Frankfurt or in Western Europe, whatever they call it.

Alessandro Vozza: Yeah. Okay, so yeah, it is still a thing, so people don’t want to be in clouds they don’t trust, so if you don’t want to be in Azure, you should not be forced to. So we offer several infrastructure pieces, clusters, even if the word cluster doesn’t even appear anywhere, because it’s by design, we don’t want people to think in terms of clusters, we want people to think in terms of namespaces and specifically tenants, which are just a collection of namespaces, right? So one namespace is not going to cut it, of course, you want to have multiple to assign to your teams, to group them in environments like prod or test, and then assign them to your teams, so they can deploy and have fun with their namespaces and tenants.

Chris Engelbert: Yeah, I think there’s one other thing which is also important when you select a cloud and stuff, you may have other applications or other services already in place, and you just want to make sure that you have the lowest latency, you don’t have to pay for throughput, and stuff like that. Something that I always find complicated with hosted database platforms, to be honest, because you have to have them in the same region somehow.

Alessandro Vozza: Yeah, that’s also a political reason, right? Or commercial reason that prevents you from that.

Chris Engelbert: Fair, fair. There’s supposed to be people that love Microsoft for everything.

Alessandro Vozza: I love Microsoft, of course, been there for seven years. I’m not a fanboy, maybe I am a little, but that’s all right. Everybody, that’s why the world is a beautiful place. Everybody is entitled to his or her opinion, and that’s all right.

Chris Engelbert: I think Microsoft did a great job with the cloud, and in general, a lot of the changes they did over the last couple of decades, like the last two decades, I think there are still the teams like the Office and the Windows team, which are probably very enterprise-y still, but all of the other ones. For me specifically, the Java team at Microsoft, they’re all doing a great job, and they seem to be much easier and much more community driven than the others.

Alessandro Vozza: I was so lucky because I was there, so I saw it with my own eyes, the unfolding of this war machine of Microsoft. There was this tension of beating Amazon at their own game. Seven years ago, we had this mission of really, really demonstrating that Microsoft was serious about open source, about cloud, and it paid off, and they definitely put Microsoft back on the map. I’m proud and very, very grateful to be here. You have been there, Microsoft joining the Linux Foundation, the Cloud Native Computing Foundation really being serious about Cloud Native, and now it works.

Chris Engelbert: I agree. The Post-Balmer era is definitely a different world for Microsoft. All right, let’s get back to Kubespaces, because looking at the time, we’re at 17. You said it’s, I think it’s a shared resource. You see the Kubernetes as a multi-tenant application, so how does isolation work between customers? Because I think that is probably a good question for a lot of security-concerned people.

Alessandro Vozza: Yeah, so of course, in the first incarnation would be a pure play SaaS where you have shared tenants. I mean, it’s an infrastructure share among customers. That’s by design the first iteration. There will be more, probably where we can offer dedicated clusters to specific customers. But in the beginning, it will be based on a mix of technologies between big cluster and Firecracker, which ensure better isolation of your workload. So it is indeed one piece of infrastructure where multiple customers will throw their application, but you won’t be able to see each other. Everybody gets his own API endpoint for Kubernetes API, so you will not be able. RBAC is great, and it works, of course, and it’s an arcane magic thing and it’s arcane knowledge. Of course, to properly do RBAC is quite difficult. So instead of risking to make a mistake in some cluster role or role, and then everybody can see everything, you better have isolation between tenants. And that comes with a popular project like big cluster, which has been already around for five years. So that’s some knowledge there already.

And even another layer of isolation, things like Kata Containers and Firecracker, they provide much better isolation at the container runtime level. So even if you escape from the container, from the jail of the container, you can only see a very limited view of the world and you cannot see the rest of the infrastructure. So that’s the idea of isolating workloads between customers. You could find, of course, flaws in it, but we will take care of it and we will have all the monitoring in place to prevent it, it’s a learning experience. We want to prove to ourselves first and to customers that we can do this.

Chris Engelbert: Right. Okay. For the sake of time, a very, very… well, I think because you’re still building this thing out, it may be very interesting for you to talk about that. I think right now it’s most like a one person thing. So if you’re looking for somebody to help with that, now is your time to ask for people.

Alessandro Vozza: Yeah. If the ideas resonate and you want to build a product together, I do need backend engineers, front-end engineers, or just enthusiastic people that believe in the idea. It’s my first shot at building a product or building a startup. Of course, I’ve been building other businesses before, consulting and even a coworking space called Cloud Pirates. But now I want to take a shot at building a product and see how it goes. The idea is sound. There’s some real need in the market. So it’s just a matter of building it, build something that people want. So don’t start from your ideas, but just listen to what people tell you to build and see how it goes. So yeah, I’ll be very happy to talk about it and to accept other people’s ideas.

Chris Engelbert: Perfect. Last question, something I always have to ask people. What do you think will be the next big thing in Kubernetes? Is it the namespace-as-a-service or do you see anything else as well?

Alessandro Vozza: If I knew, of course, in the last KubeCon in Paris, of course, the trends are clear, this AI, this feeding into AI, but also helping AI thrive from Cloud Native. So this dual relationship with the Gen AI and the new trends in computing, which is very important. But of course, if you ask people, there will be WebAssembly on the horizon, not replacing containers, but definitely becoming a thing. So there are trends. And that’s great about this community and this technologies that it’s never boring. So there’s always something new to learn. And I’m personally trying to learn every day. And if it’s not WebAssembly, it’s something else, but trying to stay updated. This is fun. And challenges your convention, your knowledge every day. So this idea from Microsoft that I learned about growth mindset, what you should know now is never enough if you think ahead. And it’s a beautiful thing to see. So it’s something that keeps me every day.

Now I’m learning a lot of on-premise as well. These are also trying to move workloads back to the data centers. There are reasons for it. And one trend is actually one very important one. And I want to shout out to the people in the Netherlands also working on it is green computing or environmental sustainability of software and infrastructure. So within the CNCF, there is the Technical Advisory Group environmental sustainability, which we’re collaborating with. We are running the environmental sustainability week in October. So worldwide events all around getting the software we all love and care to run greener and leaner and less carbon intense. And this is not just our community, but it’s the whole planet involved. Or at least should be concerned for everybody concerned about the future of us. And I mean, I have a few kids, so I have five kids. So it’s something that concerns me a lot to leave a better place than I found it.

Chris Engelbert: I think that is a beautiful last statement, because we’re running out of time. But in case you haven’t seen the first episode of a podcast, that may be something for you because we actually talked to Rich Kenny from Interact and they work on data center sustainability, kind of doing the same thing on a hardware level. Really, really interesting stuff. Thank you very much. It was a pleasure having you. And for the audience, next week, same time, same place. I hope you’re listening again. Thank you.

Alessandro Vozza: Thank you so much for having me. You’re welcome.

How Oracle transforms its operation into a cloud business with Gerald Venzl from Oracle

This interview is part of simplyblock’s Cloud Commute Podcast, available on YouTube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site.

In this installment of the podcast, we’re joined by Gerald Venzl (Twitter/X, Personal Blog), a product manager for Oracle Database, who talks about the shift of focus away from on-premises databases towards the cloud. It’s a big change for a company like Oracle, but a necessary one. Learn more about the challenges and why Oracle believes multi-cloud is the future.

EP12: How Oracle transforms its operation into a cloud business with Gerald Venzl from Oracle

Chris Engelbert: Welcome back to the next episode of simplyblock’s Cloud Commute podcast. Today I have a very special guest, like always. I mean, I never have non-special guests. But today he’s very special because he’s from a very different background. Gerald, welcome. And maybe you can introduce yourself. Who are you? And how did you happen to be here?

Gerald Venzl: Yeah, thank you very much, Chris. Well, how I really don’t know, but I’m Gerald, I’m a database product manager for Oracle Database, working for Oracle a bit over 12 years now in California, originally from Austria. And yeah, kind of had an interesting path that set me into database product management. Essentially, I was a developer who developed a lot of PL/SQL alongside other programming languages, building ERP systems with databases in the background, the Oracle database. And eventually that’s how I ended up in product management for Oracle. The ‘how I’m here’, I think you found me. We had a fun conversation about 5 years ago, as we know, when we met first at a conference, as it so often happens. And you reached out and I think today is all about talking about Cloud Native, databases and everything else we can come up with.

Chris Engelbert: Exactly. Is it 5 years ago that we’ve seen last time or that we’ve seen at all?

Gerald Venzl: No, that we’ve met 5 years ago.

Chris Engelbert: Seriously?

Gerald Venzl: Yeah.

Chris Engelbert: Are you sure it wasn’t JavaOne somewhere way before that?

Gerald Venzl: Well, we probably crossed paths, right? But I think it was the conference there where we both had the speaker dinner and got to exchange some, I mean, more than just like, “Hello, I’m so-and-so.”

Chris Engelbert: All right, that’s fair. Well, you said you’re working for Oracle. I think Oracle doesn’t really need any introduction. Probably everyone listening in knows what Oracle is. But maybe you said you’re a product manager from the database department. So what is that like? I mean, it’s special or it’s different from the typical audience or from the typical guest that I have. So how is the life of a product manager?

Gerald Venzl: Yeah, so what I particularly like about Oracle product management, or especially in database, obviously different lines of business and then inside Oracle may operate differently. It’s a job with a lot of facets. So the typical kind of product management job, the way how it was described to me was, well, you gather customer requirements, you bring it back to development, then it gets implemented, and then you kind of do go-to-market campaigns. So basically, you’re responsible for the collateral, what’s the message to advocate these new features to customers, to the world, and then that’s not so true for Oracle. I think one of the things that really excites me in the database world, it’s like this goes back to the late 70s. I mean, other than Larry, not that many people are around from the era anymore. But Oracle back then did a lot of things that were either before its time or when there simply was no other choice or way of doing it, commensurable wisdom, I would say. So one of the nice things in Oracle is that actually the coming up with new features is really a nice collaboration between development and product management.

So development just as much has their own ideas of what we need to do or should be doing, like the PMs, and we really get together and discuss it out. And of course, sometimes, there’s features that you may or may not agree with personally or don’t see the need for. And often, actually, and much more so, you get quite amazed by what we’ve come up with. And we have a lot of really smart people in the work. And one thing that, yeah, not to go too much into a rabbit hole, but a couple of things that I really like; believe it or not, database development, it feels a lot like a startup. There’s no fixed hierarchies as such, ‘you can only do this. You must only do this or anything like that.’ You can very openly approach the development leads, so even up to the SVP levels. And actually, just as we started now, one of those guys was like, “Hey, let’s talk while I’m driving into work.” I was like “sorry, I’m busy right now”. So you have that going. And then also, there’s a lot of the product management work that has a lot of facets to it. So it’s not just ‘define the product’ or anything like that. That is obviously part of it, but also it’s evangelizing, as I’m doing right now. I speak to people on a thought leadership front for data management, if you like, or how to organize data and so forth.

And as I said before, one other thing that I really enjoy working in a team is that there’s actually quite a lot of really smart people in the org that go back to the 90s and some of them even to the 80s. So I got one guy who can explain exactly how you would lay out some bytes on disk for fastest read, etc. Then this is stuff that I never really touched anymore in school. We were already too abstract. It’s like, “Yeah, yeah, yeah, whatever. There’s some disk and it stores some stuff.” But you still get these low level guys and some of them, one of them is like, “Yeah, I helped on the C compiler back then with Kernighan.” It’s like there was one of the guys who was involved in it. And so anyway, as you know in the industry, people go around quite a bit. And so that has a lot going there.

Chris Engelbert: So from the other perspective, I mean, Oracle is known for big database servers. I think everyone remembers the database clusters. These days, it’s mostly like SUN, SPARC, I guess. But there’s also the Oracle Cloud and the database in the cloud. So how does that play into each other?

Gerald Venzl: Oh, yeah. Now things have changed drastically. I mean, traditionally starting a database software in the good old 80s where you didn’t even have terminal server or whatever, a client server. So the first version is apparently a terminal based or something like that.

It’s like, again, I never saw this. But there was a big client server push. And obviously now there’s a big push into what’s cloud and a lot of cloud means really distributed systems. And so how does it play into each other? So all the database cloud services in Oracle Cloud, all the Oracle database cloud services are owned by us in development as well.

So we have gone into this mode of building cloud services for Oracle database. And of course, that’s really nice because that gives us this visibility to the requirements of distributed storage or distributed workloads and that in turn feeds back into the product. So for example, we are still one of the very few relational databases that offers you sharding on a relational model, which is, of course, much harder than a self-contained hierarchical model such as JSON, which you can shard way nicer. But once you actually split up your data across a bunch of different tables and have relations between those, sharding becomes quite more complicated.

And then of course, it’s like we have a lot of database know-how. We also got MySQL, they do also their thing with good collaboration going on with them. So we have sort of quite a good, I want to say, brainpower, intellectual power in the company when it comes to accessing data and to writing data. You mentioned SPARC before. There’s, of course, a lot of that going on. And quite frankly, I will say even way before cloud, the fact of accessing data that doesn’t necessarily sit in a database but analyze it or query it with SQL. It’s like you literally go back like 10, 12 years ago and everybody said Hadoop will kill every database and big data is the way forward. And I’m sure there was the same thing going on in the mid-2000s. I was not in the industry yet. So like, yeah, this notion of that you have data sitting somewhere else and you just want to analyze it has been around for a long time, actually much longer than people see now with object store buckets and data lakes and all the good stuff.

Chris Engelbert: So how does that look like for customers? I mean, I can see that smaller customers won’t have an issue with the cloud, but I could imagine that banks or insurances or stuff like that may actually have that. What does the typical cloud customer for Oracle look like? I think it may be very different from a lot of other people using Postgres or something.

Gerald Venzl: Yeah. I mean, you kind of mentioned it before. I think there is, ‘are you small or are you large?’ Right. And the SMB, small, medium business customers, the smaller ones, obviously, they’re very much attracted by cloud, the fact that they don’t have to stand up servers and the data centers themselves to just get their product or their services to their customers. Big guys are much more like ‘consolidation’ and the biggest customers we work with, it’s really like their data center costs are massive because they are massive data centers. So they are looking at more of a cost saving exercise. Okay, if we can lift and shift this all to cloud, not only can I close down my data centers or a large portion of them, but of course also most of them are actually re-leveraging their workforce. So people, especially the Ops guys, are always very scared of cloud or often very scared of cloud that will take their job away. But actually most customers are just thinking ‘rather than looking after the servers running this good stuff, maybe in 2024, we can leverage your time for something that’s more important to the business, more tangible to the business.’ So they’re not necessarily looking so much to just get rid of that workforce, but transforming it to take care of other tasks.

A couple of years ago when we did a big push to cloud for Oracle Database and our premier database cloud service, Autonomous Database, came out, there was quite a big push for the DBAs to transform into something more like a data governance person. So all the data privacy laws have crept in quite heavily in the last 5 to 10 years. I mean, they were always there, but with GDPR and all these sorts of laws, they are quite different in what they are asking from data privacy laws before. And this is getting more and more and more complex, quite frankly. So there was obviously a lot of aspects of, ‘hey, you are the guys who look after these databases storing these terabytes and terabytes of data.’ It’s like, ‘now we have these regulatory requirements, where this needs to be stored, how this needs to be accessed, et cetera.’ And I might try to have you figure that out and figure out whether the backup was successfully taken or something like that. So you’re looking at that angle.

But yeah, so the big guys, then they, I think to some extent also very quickly get concerned of whether data is stored in the public cloud or not. Oracle was actually, I want to say we were either the first or definitely a forerunner of what we called Cloud@Customer. So basically you can have an Oracle cloud at your site. So you reinstall Oracle cloud in your data center. So for those customers who say, “This data is really, really precious.” You always have a spectrum. It’s like there’s a lot of data you don’t care about, a lot of public data that you may or may not store, reference data and so forth, that you have to have for your operations. And then there’s actually the really sensitive data, your customer confidential information and so forth. And so there’s always a spectrum of stuff that ‘I don’t care can move quicker to cloud’ or whatever. And then of course, the highly confidential data or competitive confidential data– ‘I really don’t want anybody else to get a hold of this’ or ‘it’s not allowed or regulatory.’

Those systems then they look into a similar model where they say ‘well, we like this sort of subscription-based model where we just pay a monthly or yearly fee per use and still all the automation is there. It’s like we still don’t have to have people looking whether the backup is successful or something. But we want it in our data center. We want to be in full control. We want to be able to basically kind of pull out the cable if we have to and the data resides in our data center and you guys can no longer access it. Sort of that sense. I mean, that is obviously very extreme. And so this is what we call Cloud@Customer. You can have an Oracle cloud environment installed in your data center. Our guys will go in there and set everything up like it is in public cloud.

Chris Engelbert: That is interesting. I didn’t know that thing existed.

Gerald Venzl: Yeah, it’s actually gotten much bigger now. So just to finish up on that, it’s like, so now we have these, I mean, even governments is this next level, right? So governments come back and they say, “We’re not going to store our data in another country’s data center.” So this kind of exploded into like even what we call government regions. So, and there’s some public references out there where some governments actually have a government region of Oracle cloud in their country.

Chris Engelbert: So it’s interesting. I didn’t know that Oracle Cloud@Customer existed. Is that probably how AWS handled all the, like, AWS or what is it called, Oracle at AWS or something?

Gerald Venzl: No, so AWS is different. AWS came out with Outposts, but that was actually years later and when you do your research, you see that Oracle had this way longer. But now I think every provider has some sort of like ‘Cloud@Customer’ derivative. But now AWS has Oracle databases in what they call RDS, the relational database services. But I think what you’re thinking of is the Microsoft Azure partnership that we did.

So there’s an Oracle database at Microsoft Azure. And even that has a precursor to it. So a couple of years ago, basically Microsoft and Oracle partnered up and put a fast interconnect between the two clouds so that you kind of don’t go out of the public net. But you could interconnect them from cloud data center to cloud data center, they were essentially co-located in the same kind of data center buildings. I mean, factories is really what they look like these days. So that’s how you got this fast interconnect, or kind of like buildings next to each other. And that was the beginning of the partnership. And yeah, by now it was a big announcement, you know, Satya Nadella and Larry Ellison were up in Redmond at Microsoft, I want to say it was last fall, around September, something like that, but around the time where they had this joint announcement that yeah, you can now have Oracle database in Azure. But you know, the Oracle database happens to still run on Oracle cloud infrastructure. And why this fast connect is exposed via Azure.

Now the important thing is, all the billing, all the provisioning, all the connectivity, everything you do is going through Azure. So you actually don’t have to know Oracle cloud; the fact that it runs in Oracle cloud, that is all taken care of. And that caters to the customers, we have, you know, lots and lots and lots of customers who have applications that run on a Microsoft stack, rather than pick any Windows based application that are in Azure, it’s a natural fit, that happens to have an Oracle database backend. And I think that in general is something that we see in the industry right now that these clouds in the beginning became these massive monolithic islands where you can go into the cloud and they provide you all these services, but it was very hard to actually talk to different services between clouds.

And our founder and CTO Larry Ellison thinks very highly of what he calls multi cloud or what we call multi cloud, you know, it’s like you should not have to kind of put all your eggs in a basket. It’s literally a kind of the good old story of vendor lock-in again, just in cloud world. So yeah, you should not have to have one cloud provider and that’s only it. And even there, we have already seen government regulations that actually say you have to be able to run at least two clouds. So if one cloud provider goes out of business or down or whatever, you cannot completely go out of business either. I mean, it’s unlikely, but you know how the government regulations happen, right?

Chris Engelbert: Right. So two very important questions. First, super, super important. How do I get an interconnect to Azure data centers to my home?

Gerald Venzl: Yeah, that I don’t know. They are really expensive. There are some big pipes.

Chris Engelbert: The other one, I mean, sure, that’s a partnership between, you said Microsoft and Oracle, so maybe I was off, but are other cloud providers on the roadmap? Are there talks? If you can talk about that.

Gerald Venzl: Yeah. I mean, I’m too far away to know what exactly is happening. I do know for a fact that we get the question from customers as well all the time. And, you know, against common belief, I want to say, it’s not so much us that isn’t willing to play ball. It’s more the other cloud vendors. So, we are definitely interested in exposing our services, especially Oracle database services, on other clouds and we actively pursue that. But yeah, it basically needs a big corporate partnership. There’s many people that look at that and want to have a say in that. But I hope that in some time we reach a point where all of these clouds perhaps become interconnected, or at least it’s easier to exchange information. I mean, even this ingress/egress thing is already ridiculous, I find. So this was another thing that Oracle did from the very early days. It’s like we didn’t charge for egress, right? ‘If data goes out of your cloud, well, we don’t charge you for it.’ And now you see other cloud vendors dropping their egress prices, either constantly going lower or dropping them altogether. But you know, customer demand will push it eventually, right?

Chris Engelbert: Right. I think that is true. I mean, for a lot of bigger companies, it becomes very important to not be just on a single cloud provider, but to be failure safe, fault tolerant, whatever you want to call it. And that means sometimes you actually have to go to separate clouds, but keeping state or synchronizing state between those clouds is, as you said, very, very expensive, or it gets very expensive very fast. Let’s say it that way. So because we’re pretty much running out of time already, is there any secret on the roadmap you really want to share?

Gerald Venzl: Regarding cloud or in general? I mean, one thing that I should say is, like, Oracle database, you know, a lot of people may say, ‘it’s like, this is old, this is legacy, what can I do with it, etc.’ So that’s all not true, right? We just kind of announced our vector support and got quite heavily involved with that lately. So that’s new and exciting. And you will soon see a new version of Oracle database, we announced this already at Cloud World, that has this vector support in it. So we’re definitely top-notch there.

And the ‘how do I get started with Oracle database,’ this is also something that often people haven’t looked at for a long time anymore. So these days, you can get an Oracle database via Docker image, or you have also this new database variation called Oracle Database Free. So you can literally just Google ‘Oracle Database Free’, it’s like a successor of the good old Express Edition for those people who happen to have heard of that. But too many people didn’t know that Oracle Database, there was a free variant of that. And so that’s why we literally put it in the name, ‘Oracle Database Free.’ So that’s your self-contained, free-to-use Oracle Database, you know, it has certain storage restrictions, basically, and then you kind of go too big as a database. And the big item: it doesn’t come with commercial support. So you can think a little bit of like in the open source world of Community Edition and Enterprise Edition. So you know, it’s like, Oracle Database Free is the free thing that doesn’t come with support, it essentially restricts itself to a certain size. And it’s really meant for you to tinker around, develop, run small apps on, etc. But yeah, just Google that or go to Oracle.com/database/free. You will find it there. And just give Oracle Database a go. I think you will find that we have kept up with the times. As mentioned before, you know, one of the very few relational databases that can shard on a relational model, not only on JSON or whatever. So certainly a lot of good things in there.

Chris Engelbert: Right. So, last question: what do you think is, like, the next big thing or the next cool thing, or maybe it’s even already here?

Gerald Venzl: I mean, I’m looking at the whole AI thing that’s obviously pushing heavily. And I’m, like, old enough to have seen some hype cycles, you know, and kind of completely facepalm. And I’m still young enough to be very excited. So I’m somewhere on the fence there, like, AI could be the next big thing, or it could just, you know, kind of, once everybody realizes…

Chris Engelbert: The next not-big-thing.

Gerald Venzl: Exactly. I think right now there’s nothing else on the horizon. I mean, maybe there’s always something coming. But I think everybody’s so laser-focused on AI right now that we probably don’t even care to look anywhere else. So we’ll see how that goes. But yeah, I think there’s something to it. We shall see.

Chris Engelbert: That’s fair. I think that is probably true as well. I mean, I ask this question to everyone, and I would always have a hard time answering it myself. So I’m asking all the people in order to have a good answer ready if somebody asks me that someday.

Gerald Venzl: Yes. Smart, actually.

Chris Engelbert: I know, I know. That’s what I try to be. I wanted to say that’s what I am, but I’m not sure I’m actually smart. All right. That was a pleasure. It was nice. Thank you very much for being here. I hope to see you somewhere at a conference soon again.

Gerald Venzl: Yeah, thanks for having me. It was really fun.

Chris Engelbert: Oh no, my pleasure. And for the audience: I’ll hear you next week, or rather, you’ll hear me next week. Next episode, next week. See you. Thanks.

The post How Oracle transforms its operation into a cloud business with Gerald Venzl from Oracle appeared first on simplyblock.

EP12: How Oracle transforms its operation into a cloud business with Gerald Venzl from Oracle
AWS Cost Optimization with Cristian Magherusan-Stanciu from AutoSpotting (interview) https://www.simplyblock.io/blog/aws-cost-optimization-with-cristian-magherusan-stanciu-from-autospotting/ Thu, 28 Mar 2024 12:13:27 +0000 https://www.simplyblock.io/?p=304 This interview is part of the simplyblock Cloud Commute Podcast, available on YouTube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site. In this installment, we’re talking to Cristian Magherusan-Stanciu from AutoSpotting, a company helping customers cost-optimize their AWS EC2 spend by automatically supplying matching workloads with spot instances. […]

The post AWS Cost Optimization with Cristian Magherusan-Stanciu from AutoSpotting (interview) appeared first on simplyblock.

This interview is part of the simplyblock Cloud Commute Podcast, available on YouTube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site.

In this installment, we’re talking to Cristian Magherusan-Stanciu from AutoSpotting, a company helping customers cost-optimize their AWS EC2 spend by automatically supplying matching workloads with spot instances. Cristian talks about how spot instances work, how you can use them to save up to 60% of your EC2 cost, and how tools like ChatGPT, Copilot, and AI Assistant help you write (better) code. See more information below on what AWS cost optimization is, what the components of cloud storage pricing are, and how simplyblock can help with cloud cost optimization.
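For readers who want to see what a spot request looks like in practice, here is a minimal sketch using boto3. It is not how AutoSpotting itself works internally; it only illustrates the kind of spot launch that such tools automate, and the AMI ID, instance type, and maximum price below are made-up placeholders.

# Minimal sketch: launching a single EC2 spot instance with boto3.
# AMI ID, instance type, and MaxPrice are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "MaxPrice": "0.05",        # pay at most $0.05/hour (illustrative)
            "SpotInstanceType": "one-time",
        },
    },
)

print("Launched spot instance:", response["Instances"][0]["InstanceId"])

If the current spot price stays below the maximum, the instance starts like any other; the trade-off is that AWS may reclaim it, which is why tools like AutoSpotting keep workloads and spot capacity matched automatically.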

Key Learnings

What is AWS Cost Optimization?

AWS cost optimization involves strategies and tools to reduce and manage the costs associated with using Amazon Web Services. Key components include:

Right-Sizing of Instances: Adjusting instance types and sizes based on actual usage patterns.
Reserved Instances and Savings Plans: Committing to long-term usage to benefit from reduced rates. For more information, see our blog post on the AWS Enterprise Discount Program (EDP).
Auto Scaling: Automatically adjusting resource capacity to meet demand without over-provisioning.
Monitoring and Analysis: Using tools like AWS Cost Explorer and Trusted Advisor to monitor usage and identify savings opportunities (a small Cost Explorer sketch follows this list).
Resource Tagging: Implementing tags to track and allocate costs effectively.
Reseller Programs: Using programs like DoiT Flexsave™ that provide higher flexibility in pricing.
Alternative Providers: Looking at alternative providers for certain features, like elastic block storage.
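As mentioned in the monitoring item above, the Cost Explorer API can break spend down programmatically. Below is a minimal boto3 sketch that groups monthly unblended cost by a cost-allocation tag; the tag key "team" and the date range are assumptions made purely for illustration.

# Minimal sketch: monthly unblended cost grouped by an (assumed) "team" cost-allocation tag.
import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

result = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-04-01"},  # illustrative range
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for period in result["ResultsByTime"]:
    for group in period["Groups"]:
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print(period["TimePeriod"]["Start"], group["Keys"][0], amount)

Note that grouping by a user-defined tag only works once it has been activated as a cost-allocation tag in the billing settings, which is one more reason consistent resource tagging pays off.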

These strategies help organizations maximize their AWS investments while maintaining performance and scalability. AWS provides a suite of management tools designed to monitor application costs and identify opportunities for modernization and right-sizing. These tools enable seamless scaling up or down, allowing you to operate more cost-effectively in an uncertain economy. By leveraging AWS, you can better position your organization for long-term success.

What are the Components of Cloud Storage Pricing?

Cloud storage pricing is typically composed of several components:

Storage Capacity: The amount of data stored, usually measured in gigabytes (GB) or terabytes (TB).
Data Transfer: Costs associated with moving data in and out of the storage service.
Access Frequency: Pricing can vary based on how often data is accessed (e.g., frequent vs. infrequent access tiers).
Operations: Charges for operations like data retrieval, copying, or listing files.
Data Retrieval: Costs associated with retrieving data from storage, especially from archival tiers.
Replication and Redundancy: Fees for replicating data across regions for durability and availability.
Performance and Throughput Requirements: IOPS (Input/Output Operations per Second) define how many storage operations can be performed per second on a given device. Cloud providers charge for high-performance storage that exceeds the included IOPS (a simple worked example follows this list).
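To make these components concrete, the short sketch below estimates a monthly bill for a single provisioned-IOPS block volume. All unit prices in it are assumptions chosen for easy arithmetic, not actual list prices of any provider, and data transfer, retrieval, and replication charges are left out.

# Back-of-the-envelope monthly storage bill with assumed (not actual) unit prices.
capacity_gb = 1000             # provisioned capacity
provisioned_iops = 6000        # requested IOPS
included_iops = 3000           # IOPS included with the volume (assumed)

price_per_gb_month = 0.10      # $/GB-month (assumed)
price_per_iops_month = 0.005   # $/IOPS-month above the included baseline (assumed)

capacity_cost = capacity_gb * price_per_gb_month
iops_cost = max(0, provisioned_iops - included_iops) * price_per_iops_month

print(f"Capacity:   ${capacity_cost:.2f}/month")              # $100.00
print(f"Extra IOPS: ${iops_cost:.2f}/month")                  # $15.00
print(f"Total:      ${capacity_cost + iops_cost:.2f}/month")  # $115.00

Even in this toy example, the performance component adds 15% on top of pure capacity, which is exactly the kind of line item that gets overlooked when volumes are over-provisioned "to be safe."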

It’s important to thoroughly understand the components of cloud storage pricing in order to optimize your cloud costs. This matters for several reasons: it reduces redundant expenses, ensures optimal allocation of cloud resources to prevent over-provisioning and under-utilization, allows you to scale, and frees up budget to invest in other areas that enhance overall competitiveness.

How can Simplyblock help with Cloud Cost Optimization?

Simplyblock aids in cloud cost optimization by providing high-performance, low-latency elastic storage that combines the speed of local disks with the flexibility and features of a SAN (Storage Area Network) in a cloud-native environment. Simplyblock storage solutions are seamlessly integrated with Kubernetes and provide zero-downtime scalability: a storage cluster that grows with your needs. More importantly, simplyblock provides cost efficiency gains of 60% or more over Amazon EBS. Calculate your savings with simplyblock now.

The post AWS Cost Optimization with Cristian Magherusan-Stanciu from AutoSpotting (interview) appeared first on simplyblock.

Reducing cloud costs by 30%: a case study on relational databases with hybrid cloud https://www.simplyblock.io/blog/reducing-cloud-costs-by-30-with-a-hybrid-cloud-model-a-case-study-on-relational-databases/ Mon, 27 Feb 2023 12:06:55 +0000 https://www.simplyblock.io/?p=334 The benefits of a hybrid cloud operating model are obvious: take the best of both worlds. Flexibility, fast time-to-market, cost-effective high availability and a huge variety of great platform services at your fingertips on the one side; cost-efficient operations of resource-intense, less volatile workloads combined with maximum privacy for highly sensitive data and the […]

The post Reducing cloud costs by 30%: a case study on relational databases with hybrid cloud appeared first on simplyblock.

The benefits of a hybrid cloud operating model are obvious: take the best of both worlds. Flexibility, fast time-to-market, cost-effective high availability and a huge variety of great platform services at your fingertips on the one side; cost-efficient operations of resource-intense, less volatile workloads combined with maximum privacy for highly sensitive data and the ability to process large amounts of data close to their origin on the other.

We have collected real-world usage data from e-commerce and retail companies and found that, while customers use a large variety of different IaaS and PaaS services in the cloud, a large chunk of the actual cloud costs often goes into just a few, rather basic, but very resource-intensive services. These are perfect candidates for offloading from the cloud.

A good example is relational open-source databases such as PostgreSQL and MySQL. Looking both at the cloud bills of scale-ups and at the workloads of long-standing enterprises, it becomes clear that relational databases make up a significant part of overall IT infrastructure resource consumption (and IaaS/PaaS cloud costs). In our analysis, we first looked at the distribution of average cost across AWS services for two medium-sized e-commerce companies. The cost of the cloud database service makes up 46% of their total cloud bill, with 10% for RDS storage and 36% for RDS instances.

We then looked at the consumption patterns of customers based on their mix of reserved and on-demand instances. We further analyzed the daily, weekly, and monthly usage patterns and performed a cost comparison based on a model that provides sufficient resources (RAM, CPU, network bandwidth, storage capacity, and IOPS) to cover peak demand with a safety reserve on top. We prepared a model considering the total cost of ownership for running the databases on premises, in a data center located relatively close to the AWS availability zone and closely integrated with it. In addition, the scenario includes using the AWS availability zone as the site for database backups and disaster recovery.
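A minimal sketch of the sizing logic described above, provisioning for peak demand plus a safety reserve; the peak figures and the 30% reserve are illustrative assumptions, not numbers from the actual analysis.

# Size on-premises capacity for observed peak demand plus a safety reserve (illustrative values).
peak_demand = {"vcpu": 96, "ram_gb": 768, "iops": 40000, "storage_tb": 12}
safety_reserve = 0.30  # 30% headroom on top of observed peaks (assumed)

provisioned = {resource: peak * (1 + safety_reserve) for resource, peak in peak_demand.items()}

for resource, value in provisioned.items():
    print(f"{resource}: provision roughly {value:.0f}")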

The cost distribution of the new private-cloud database service, which replaced the RDS service, looks as follows:

Co-location and external network costs include the racks, power, data-center networking ports across racks and management VPN. Equipment includes servers, separate storage nodes and networking switches. We assume a hardware depreciation period of 5 years and an average annual cost of capital of 6%. Backup storage is not included as it remains purely cloud-based.
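The model above may well annualize the hardware investment differently (for example, straight-line depreciation plus interest on the bound capital), but as a hedged illustration, the standard annuity formula with the stated 5-year period and 6% cost of capital looks like this; the purchase price is a made-up figure.

# Annualized equipment cost via the annuity formula (hypothetical purchase price).
purchase_price = 200_000   # assumed total hardware investment
years = 5                  # depreciation period from the scenario
rate = 0.06                # average annual cost of capital from the scenario

# Equal yearly payment that covers both depreciation and the cost of capital.
annual_cost = purchase_price * rate / (1 - (1 + rate) ** -years)

print(f"Annualized equipment cost: {annual_cost:,.0f} per year")  # roughly 47,500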

Software subscriptions include subscriptions for Ubuntu with OpenStack, Grafana (monitoring), and Puppet Enterprise (configuration management).

AWS storage includes performance block storage at the EC2 level and backup storage. It is required for cross-site backup and disaster recovery. The replicated storage also serves as a hub for outbound data to all subsequent systems, which still reside in the cloud.

The cost of an external management and operations service for the private cloud stack would add an extra 50% on top of the reduced database cost. However, with cost savings of 80% on the database itself compared to AWS RDS, the overall savings remain substantial, as presented below.
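The arithmetic behind that statement can be sketched with a normalized baseline; the exact figures of the study differ, but the sketch shows why the management surcharge does not eat up the savings.

# Why a 50% management surcharge still leaves large savings (normalized, illustrative units).
rds_cost = 100.0                           # former AWS RDS cost, normalized to 100 units
private_db_cost = rds_cost * (1 - 0.80)    # 80% cheaper than RDS -> 20 units
managed_cost = private_db_cost * 1.50      # external management adds 50% on top -> 30 units

savings_vs_rds = 1 - managed_cost / rds_cost
print(f"Database cost after management surcharge: {managed_cost:.0f} units")
print(f"Remaining savings vs. RDS: {savings_vs_rds:.0%}")  # 70%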

As becomes apparent, because relational databases account for such a big chunk of the overall IaaS/PaaS costs, savings of more than 35% on the overall IaaS/PaaS bill could be achieved in this scenario.

In conclusion, adopting a hybrid cloud operating model can bring significant benefits to businesses by combining the best of both worlds – flexibility, fast time-to-market, affordable resilience, and access to a wide range of platform services, while also ensuring cost-efficient operations and maximum privacy for sensitive data.

The post Reducing cloud costs by 30%: a case study on relational databases with hybrid cloud appeared first on simplyblock.
