Modern Private Cloud Infrastructure: OpenMetal | SourceForge Podcast, episode #106

By Community Team

OpenMetal is an automated bare metal and private cloud infrastructure provider that delivers high-performance, cloud-native environments with predictable, fixed costs. By combining the power of dedicated hardware with the flexibility of cloud platforms like OpenStack and Kubernetes, OpenMetal helps organizations run scalable workloads while avoiding the complexity and unpredictable pricing of traditional public clouds.

Discover how OpenMetal can simplify your infrastructure while cutting costs

In this episode, we discuss the shift from public cloud platforms like AWS, Azure, and Google Cloud to modern private cloud solutions, particularly those built on open technologies like OpenStack. We speak with Todd Robinson, president of OpenMetal, who shares insights into the benefits of private cloud models, such as cost predictability, compliance, and security. Todd explains how OpenMetal provides on-demand OpenStack and Ceph clusters, making private cloud solutions more accessible to businesses. The conversation also touches on the role of AI in cloud management, the importance of human oversight, and the evolving landscape of cloud infrastructure.

Watch the podcast here:

Listen to audio only here:


Learn more about OpenMetal.

Interested in appearing on the SourceForge Podcast? Contact us here.


Show Notes

Takeaways

  • Private cloud offers more control than hyperscalers.
  • OpenStack can pair cloud flexibility with fixed costs.
  • Open source private cloud reduces licensing pressure.
  • Dedicated infrastructure avoids noisy-neighbor issues.
  • Private cloud fits larger, steady-state workloads best.
  • Compliance needs often favor more isolated environments.
  • Migration is complex, but now more repeatable.
  • Support matters when moving critical infrastructure.
  • Hybrid setups can mix bare metal with private cloud.
  • Engineers benefit from less FinOps overhead.
  • AI can help operations, but needs human review.
  • Open source still matters in infrastructure design.
  • OpenMetal focuses on customized cloud-by-workload design.
  • The tipping point often starts around cloud spend.

Chapters

00:01 – Intro to hyperscalers vs private cloud
01:31 – OpenMetal background and OpenStack approach
04:07 – What led Todd beyond hyperscaler limits
09:54 – OpenStack, bare metal, and hybrid setups
14:04 – Best-fit companies and workload tipping points
16:23 – Compliance, security, and isolated environments
20:18 – What migration to private cloud looks like
23:28 – Support, project flow, and engineer access
28:43 – How OpenMetal uses AI internally
31:50 – Human-in-loop and AI risk in infra ops
34:55 – AI as a force multiplier for small teams
40:09 – Trends ahead: AI sysadmins and red teaming
45:09 – Open source questions in the AI era
53:09 – Who should consider OpenMetal
56:22 – Where to learn more and get started

Transcript

Beau Hamilton (00:01.24)
Hello everyone and welcome to the SourceForge Podcast. I’m your host, Beau Hamilton. Today we’ll be taking a closer look at an alternative approach to cloud infrastructure. For years now, many organizations have relied on what are called hyperscalers for their public cloud needs. These are the platforms many of us know: AWS, Azure, Google Cloud. These guys control over two thirds of the global cloud market. But as companies scale and workloads grow, the costs grow with them, you know, causing some teams to really start rethinking their choice, especially as more competitors and alternative options become available. And one approach that’s been picking up steam, I would say in terms of popularity, is the modern private cloud, particularly platforms built on open technologies like OpenStack. And the idea behind this approach is that it combines the flexibility of the public cloud with the control, and I would say the predictability, of infrastructure dedicated to specific tasks.

So you don’t have that noisy neighbor effect that kind of drags down performance. You have a more predictable pricing model, usually fixed monthly fees. And you can better comply with your industry’s various compliance and security requirements, among other things, right? So to talk more about the private cloud model, we have Todd Robinson, President at OpenMetal, home of the first on-demand open source private cloud.

So I’m really excited to get into this conversation. Todd, welcome to the podcast. Glad you could join us.

Todd (01:30.68)
Excellent. Yeah, Beau, I definitely appreciate it. I’ll add on to that intro into the private cloud space versus the public cloud. But yeah, I’m happy to start off with a little bit about OpenMetal, which may give some background, so you know that most of what I’m talking about is based upon history rather than philosophy. There’s a lot of experience built up over 25 years, actually, honestly, serving hundreds of thousands of customers over the years, both in this brand and in other brands that I’ve been lucky to be a part of, be a leader in, or be a founder of. But it’s 25 years in the open source space, and for us, we had always loved OpenStack, and we had loved this idea of being able to own your own cloud. And so for OpenMetal, we said, hey, can we actually make this even more accessible for end customers? One of our big missions was to say, how do we take a system like OpenStack and Ceph, and other really large but mature open source systems, that usually would be considered pretty tricky to get a hold of as a relatively small business? And small business doesn’t mean, you know, the three-to-five-people kind of company; this might be in the 40 to 100 range, something like that, as a smaller business.

And yeah, so for OpenMetal, we took on that challenge and years ago created an on-demand OpenStack and Ceph hyper-converged cluster. So you can go in, hit the button, and 30 seconds later you have this with all the underlying pieces: the Ceph is set up right, OpenStack is linked to Ceph properly, the redundancy is there, and one of the servers can die while the networking stays running. So the HA was all accounted for.

As you can imagine, that was quite a trial, and anybody who knows that space knows the complexity of it. And obviously we leaned on other greats that were in the space, but we were able to combine that all together with our hosting model and our OpenStack and Ceph expertise to produce that. And so that’s something people can actually do today: go in, hit the button on three servers, and have a full OpenStack and Ceph cluster 30 to 45 seconds later. So a little bit of magic in there.
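
The HA story Todd describes can be sketched in rough numbers. This is a purely illustrative sketch, assuming a hypothetical three-node cluster with 3x Ceph replication; the node sizes and fill target are made-up examples, not OpenMetal hardware specs:

```python
# Illustrative: rough usable capacity of a small hyper-converged Ceph
# cluster like the three-node deployment described above. Numbers are
# hypothetical examples, not real hardware specs.

def usable_capacity_tb(node_capacities_tb, replication=3, fill_target=0.8):
    """Approximate usable Ceph capacity with N-way replication.

    Each object is stored `replication` times, so raw capacity is divided
    by that factor; `fill_target` leaves headroom so the cluster can keep
    serving and rebalancing when a server dies (the HA case above).
    """
    raw = sum(node_capacities_tb)
    return raw * fill_target / replication

cluster = [12.0, 12.0, 12.0]  # three nodes, 12 TB raw each (hypothetical)
print(round(usable_capacity_tb(cluster), 1))  # 9.6 -> ~9.6 TB usable
```

The point of the 3x replication and the headroom factor is exactly the property Todd mentions: one of the three servers can fail without data loss or downtime.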

But I’m really illustrating that just because that is our background, and why we have some opinions that we hope, and we do believe, others can benefit from in seeing how this private cloud system has come together. Yeah, so I’ll pause there and see what else you have for me. And then if you don’t mind, I’ll add to the reasons why private cloud may fit with some companies.

Beau Hamilton (04:07.19)
Absolutely. Yeah, thanks for giving that overview. You’ve given me lots of material to ask you questions about and dissect some more. I want to start with your experience. You said you’ve been in this space for 25-plus years. You’ve experienced the growth, the adoption phases, the growth of the hyperscalers, and you’ve refined your approach of doing everything sort of in-house, or having more control over the infrastructure. Before I get into some of the more technical questions I have for you, I’m curious: was there a moment where you started to notice the limitations or the trade-offs with some of these bigger hyperscalers that led you to build something like OpenMetal?

Todd (05:01.864)
Yeah, in our case, my original roots are from a hosting company started in 2000, 2001, way back in the day. And in those cases, the automation that the big cloud providers really made popular didn’t exist at the time, right? So we had to build all that stuff ourselves. And we were always building on open source. So that really started this open source love, and the attitude of sharing back, not being the only ones that are going to benefit from it. And yeah, so we had many years where we ourselves had to figure out our own virtualization, figure out how we were going to do VPSs and networking, before the advent of even OpenStack itself, when it started to come together out of Rackspace and NASA. And then, of course, came the advent of the public clouds, where they really said, hey, we’re going to take all that stuff, because underneath their hood is all this open source stuff too, as you might imagine.

They were going to take that and really say, yeah, let’s make it a lot more usable, more user friendly. And this kind of cloud-native idea was born, which I agree with and I love. Once you’ve gone to that, where you’re like, cool, I just call an API, I get my resources, it’s on the proper network, I can set up my firewall rules, etcetera, and it’s all even infrastructure as code. If you’ve graduated to that spot, I would say you want to stay there. You really don’t want to go back. So in this kind of conversation where you’re asking, hey,

what are some of the positives and negatives of having a private cloud versus a public cloud? Going back to how you started this off: I would suggest you stay relatively cloud native. Definitely AI has helped people go, I can just run it on bare metal because I can ask my friendly LLM to make me the stack. Yes, you can, but that problem has actually already been solved quite well. And in fact, it’s probably better to use your AI to talk to the cloud APIs, like ours.

We definitely encourage people: if you’re really into AI, here’s, in our case, our OpenRC file, and you can just tell your favorite LLM, hey, go ahead and control the infrastructure with this. So I would definitely encourage people: you don’t need to replace the cloud-native approach with an AI-native approach that goes all the way down to the hardware level. You can, though, of course. And in our case, here at OpenMetal, we do bare metal as a service, as well as this cloud as a service, as well as storage clusters, essentially, as a service.
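
As a rough illustration of what an OpenRC file carries, here is a minimal sketch that parses the `export` lines of an OpenRC-style file into a credentials dict. The endpoint URL and names below are hypothetical placeholders, not OpenMetal’s actual values:

```python
# Minimal sketch of what an OpenStack OpenRC file carries and how a
# script (or an LLM-driven tool, as Todd describes) would load it before
# calling the cloud APIs. Endpoint and names are made-up placeholders.

def parse_openrc(text):
    """Parse `export KEY=value` lines from an OpenRC file into a dict."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("export ") and "=" in line:
            key, _, value = line[len("export "):].partition("=")
            env[key.strip()] = value.strip().strip('"')
    return env

openrc = """
export OS_AUTH_URL="https://cloud.example.com:5000/v3"
export OS_USERNAME="demo"
export OS_PROJECT_NAME="demo-project"
"""
creds = parse_openrc(openrc)
print(creds["OS_AUTH_URL"])  # https://cloud.example.com:5000/v3
```

With those variables actually exported in a shell, standard OpenStack clients (for example, openstacksdk’s `openstack.connect()`) can pick them up from the environment, which is the API-first access pattern Todd is pointing at.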

Now, in our case, we’re typically selling to relatively larger organizations that might have a spend on the public cloud of maybe $30,000 a month to $200,000 a month, something like that. So these are typically quite large workloads they’re dealing with. And in that case, we really work with them over time. Nobody just wakes up and says, I’m going to move $200,000 per month of infrastructure usage from cloud A to cloud B, as much as you would like to do that. Though there are definitely times I get the call from the CEO who’s just like, I am done with this. I no longer want to work with them at all. They did this or this, or I tried to buy support from them.

Like you had mentioned, oftentimes when customers come to us, the reason they may be leaving the public cloud is that the service level is not okay for them. They need a higher level of service. And sometimes the big clouds, the mega clouds, might say, well, go to a service provider; you’re still going to be on us, but they’re going to give you that support. That’s okay for some models. But then you start piling the service-level costs on top of the cloud costs, and all of a sudden you might have made something actually even worse. So in our case, oftentimes we’ll get people that are looking for the service, and the issue isn’t exactly the cloud itself, AWS or something like that; the issue is they couldn’t get the support level. The dollar figure where their higher-end support levels start can be really high. So yeah, there are some of those things.

Beau Hamilton (08:56.11)
Yeah, it turns into a sort of full-time job trying to migrate and consider the alternative, just dealing with the customer support angle. I want to pause you there for a second, and I’m glad you provided that overview, because I think you have these two worlds. You’ve got the large public clouds, the closed platforms where the biggest platform companies dominate the space. Then you have the alternatives. You mentioned your solution is built on OpenStack, and you’ve also mentioned how clients work directly with bare metal. Could you explain those two terms and those foundations?

Todd (09:54.92)
So in our case, when we deploy a cloud, it’s going to have all kinds of your normal things. Like, hey, give me block storage, give me object storage, give me a VM this size. Or, in our case, since we actually customize the cloud to the company or to the workloads they’re going to be running on there, they may even say, I want a flavor of VM available that’s very unique, because I use this all the time for this certain workload or this database type.

So on our side, we’ll tend to meld together what’s appropriate for the customer. In some cases, they’re going to say, well, I have this huge ClickHouse server. ClickHouse is great at running on its own bare metal, and it likes to have direct access to the hardware. But they also want other VMs and this ease of use of the cloud. So in this case, what we actually deploy is a hybrid system. We’ll put the dedicated servers, that bare metal, inside the same networking, the VLANs, etcetera, so that it’s trivially accessible on a private network to the cloud, and the cloud can access it, act as its overwatch. There are certain systems that are sometimes better run on the VMs, but there’s definitely big data type stuff where you just take the bare metal, just use that. You don’t need the noise of the VMs in there, particularly for systems like that. ClickHouse is a great example: it loves being on the bare metal, managing itself; it’s HA by itself, that’s already built into the system. But it also likes to have access to other resources. So for example, we can provide ClickHouse a storage cluster, a Ceph-based storage cluster, that it can use as cool storage, natively.

So it can talk to Ceph over an S3-compatible API in order to push the storage over there. So we would talk to a customer and say, tell us a little bit more about that workload, because with us, you have that opportunity of customizing your infrastructure and your cloud to your workload instead of the reverse, right? You go to the big clouds, you are going to do it their way. That’s it, right? Because that’s all they’re going to offer you. And so you have to conform to their systems, which in some cases is counter to the performance of the workloads you’re actually running on there.
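
The hybrid layout Todd describes, bare metal and VMs sharing one private VLAN, can be sketched with Python’s standard-library `ipaddress` module. All addresses here are hypothetical examples:

```python
import ipaddress

# Illustrative sketch of the hybrid layout described above: a bare-metal
# ClickHouse host and cloud VMs placed on the same private VLAN so they
# reach each other without leaving the private network. Addresses are
# hypothetical examples, not a real deployment.

vlan = ipaddress.ip_network("10.20.0.0/24")          # shared private VLAN
bare_metal = ipaddress.ip_address("10.20.0.10")      # dedicated ClickHouse box
vms = [ipaddress.ip_address(f"10.20.0.{i}") for i in (50, 51, 52)]

# Every host on the VLAN can reach the bare-metal node directly.
assert bare_metal in vlan and all(vm in vlan for vm in vms)
print(f"{bare_metal} reachable by {len(vms)} VMs on {vlan}")
```

Because everything sits in one subnet, the VMs talk to the dedicated server over private addressing, which is the "cloud acting as overwatch" arrangement described above.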

Beau Hamilton (12:11.094)
It’s counter, and they’re probably also potentially charging you for things you’re not necessarily utilizing. Is that also true in some cases?

Todd (12:18.552)
Yeah, you know, we’ve seen a lot of this with the people working in the FinOps world at these companies running really large workloads. If you open up that SKU list of what they paid for and look at it, they might know what’s in it, if they’ve taken the time to go in there and tag it properly, to say, well, this department has this thing and this thing has the sub-thing.

But it’s really something. To be honest, yes, I’m sure that happens; they get stuff that they don’t want. But sometimes you can’t even tell what you got in there. So this is one of the fundamental differences too: when it’s your own cloud, you have a sunk cost level, and our system will tell you how close you are to that. But since the sunk cost is going to be considerably less than what their normal bill is anyway, you don’t have to worry about it as much.

You don’t have to track it anywhere near as fine-grained. Your engineers don’t have to go, shoot, can I spin that up? Am I going to get yelled at by somebody out of the finance department next month when I do that? They don’t have to deal with that. So some of the mental drain that is not engineering comes off the engineers’ plates when they use their own cloud. So yeah, there are a few of these things that crop up a lot when you ask what customers really value about a platform like this.
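
The fixed-cost point can be made concrete with a small sketch. The dollar figures and capacity numbers below are invented examples, not OpenMetal pricing:

```python
# Hedged sketch of the fixed-cost model above: with a private cloud you
# pay one known monthly figure, so the question for an engineer is just
# "how close is the cluster to capacity?", not "what will this VM cost?".
# All numbers are made-up examples.

def headroom(used_vcpus, total_vcpus):
    """Fraction of cluster capacity still free."""
    return 1.0 - used_vcpus / total_vcpus

def effective_rate(fixed_monthly_cost, used_vcpus):
    """Cost per used vCPU-month; it falls as utilization rises,
    the opposite of a metered public-cloud bill."""
    return fixed_monthly_cost / used_vcpus

print(headroom(384, 512))                    # 0.25 -> 25% capacity free
print(round(effective_rate(15000, 384), 2))  # 39.06 per used vCPU-month
```

This is the "how close are you to the sunk cost" readout in miniature: spinning up one more VM changes headroom, not the bill.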

Beau Hamilton (13:40.216)
So I think, yeah, the value of having a customized approach, where it’s custom built to your specific workloads, speaks for itself. But I’m curious, is your solution meant to serve a broad range of use cases? Or is there a particular type of company or industry where your solution tends to be the best fit?

Todd (14:04.866)
So in our case, it’s a whole cloud. So it’s going to have a lot of these tools that you would expect. It’s easy to call and get block storage at a different performance level. You can have object storage, really cold, very low cost stuff. You can have different CPU performance levels, based upon what your workload is going to be. But you usually find the companies that get the most value out of customizing their cloud to their workload typically do have pretty decent-sized workloads. So it could be big development departments; it could be big disaster recovery systems they have to run. But it’s where they’re pretty sizable. There’s definitely a tipping point below which you should stay on the public cloud. It’s the greatest deal ever, because you get access to so many of these things as a relatively small organization with a relatively small spend. And so that’s why I say there’s definitely a tipping point, as we call it.

And yeah, it has to do with the workloads and the scale. And so for us, historically our happiest customers would have been somebody that might have been on the public cloud at $30,000 a month to $200,000 a month or bigger. And then when they come over to us, maybe it’s half that, and it’s a higher-performing system. And there’s a little bit less brain drain of having to wonder, can I hit the button and actually use that VM, or am I going to be over my limit?

Yeah, so there’s definitely scale involved before you really get the benefit. But we also have customers who are just like, I love OpenStack, I don’t want to be on AWS. And our European customers often will start the conversation that way, to say, hey, I’ve had it with... I hate to say it in this way.

Beau Hamilton (15:43.106)
Yeah, the data sovereignty sort of angle, the philosophical angle that plays into it, right?

Todd (15:47.676)
Yeah, the philosophical thing. And just, you know, regionally around the world, as this type of technology has matured, it’s opened up more doors for people, so they can make choices about the brands they want to do business with. And I would just say, you know, I grew up with AWS, so it’s hard for me; I know what they do too, and I also go, okay, but I grew up with them, so the brand sits in a particular spot for me. That’s not the case in all parts of the world. In many cases, people have grown up with a negative impression of a brand, and they’ve been trying to get off of it for many years. So yeah, there are a lot of pieces that move around in that decision making.

Beau Hamilton (16:23.532)
Well, and also you have that perception of a brand you might want to work with, but there’s also the compliance and security side of things, which is incredibly important to prioritize. I think a lot of the customers and industries you work with have strict regulations around what they can or can’t do with data. Maybe it’s personal health records, or financial transactions. I think of HIPAA with healthcare, or GDPR, like you mentioned, with some of the European companies you deal with. How do you navigate those challenges and concerns for current as well as prospective customers?

Todd (17:06.994)
Yeah, so you need to have the standard certifications, SOC 2, etcetera. That’s kind of a given that you have to have on your checklist. And then for us, since it is fundamentally a private cloud, there is a whole other layer of control that they can have. In fact, with a private cloud, for many of the customers, not all of them, we don’t have access to it either, right? Because they would only selectively add us to it to give support on something they may need, or to do some upgrade of some sort. So their option is that there’s really no outside access to those systems. They have the freedom to broadcast their own IP space, and we’ll route those for them, so that from the outside world, they’re an autonomous bubble. They’re running on their own machines, they’re on their own VLANs, they have their own routes, they have all their own stuff.

And so it actually allows them to make choices that sometimes they couldn’t make on the public cloud, or that were prescriptive in ways they didn’t like. So we have a set of customers that came in because they have these compliance-type issues, or the public clouds are not friendly to them. For example, we host a fantastic threat intelligence company, hunt.io actually, I can throw out their name, they’re public about it. And it’s because they themselves have a usage pattern on the internet that would look suspicious to someone who didn’t know their customer very well, because they’re out there scanning and looking, doing their threat hunting all the time out of these systems. And so, as you can imagine, you might look suspicious to a big cloud company who doesn’t actually know who you are, right? They don’t know who you are. So you’re just going to fall in that bucket that says, turn that thing off. And then, wow, their business is paused for a day as they’re trying to get a hold of somebody to explain: no, no, it’s not what it looks like.

We’re legit in here. Look, the government signed this, and we have this, etcetera. So in our case, we can be much closer to the customer, and also, again, get out of the way, so that if they need to be public-facing and really their own entity, they can be. We often find customers that have said, I need to get out of my aging co-location or data center, and I would like to be a little less hands-on with hardware and CapEx, but they saw the writing on the wall with public cloud and said, hey, I know what I will get there. I might be happy for the first couple of years, but after that, I won’t be so happy. So we’re also very effective for that type of customer, one that wants to be a little more distant from the hardware and the data center, but wants the cloud native experience and the cost and the control, routing their own IPs and everything. And they may have some of these:

Hey, I have to have some specific hardware, for example, some networking gear that might be dictated by the government or by their banking partners, and we can accommodate that. So there are some things that, as a private cloud company, we can accommodate where the big public clouds won’t take on that edge case.

Beau Hamilton (20:10.808)
So let’s say customers are sold on the idea of moving away from the proprietary cloud platform for all the reasons you mentioned. What does that transition look like? I imagine that it could be quite the headache when you make that decision. And if you don’t know what you’re doing, you don’t have the right resources, it could get kind of ugly pretty fast. How does that transition typically take place?

Todd (20:37.5)
Yeah, it’s pretty interesting. Well, one thing is there’s data gravity. So there’s some basic math you’re going to have to deal with. It just says, okay, I’ve got a petabyte of something here that has to end up over here; that’s going to take a bit of time. Or, if I’m cloud native with infrastructure as code: with us, just like with any of the other cloud providers, you can use Terraform. If you could point it at GCP, that Terraform, with a little bit of modification, can be pointed at our cloud as well, and it’ll recreate the infrastructure here. So depending on where you are, you might follow a different pattern. The infrastructure-as-code one is more straightforward. Or let’s say, I’ll pick a number, you have 1,000 VMs, and they’re all relatively unique. They weren’t created with infrastructure as code.

They’ve been long-running items. A quarter of them are Windows, and the rest are Linux of some different flavors, and they’re each kind of in their own networks. In that case, we’re going to use one of our partners. We have a couple of partners, Trilio and Hystax; these are companies that specialize in these moves and migration systems. Basically, what their software does is look at the infrastructure that you have, make up a little bit of an infrastructure-as-code playbook, and then point it over at ours and say, okay, this is what’s got to be created.

And then they fire up these migration agents that essentially mirror the data across, block storage to block storage. And then one day, you just say, okay, we’re all done. Over here, those get turned off; over there, these get turned on, and it takes over. You can even do that through a DR process, because a disaster recovery process is kind of doing the same thing.
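
The phased cutover described above, mirroring a set number of VMs each night until the fleet is moved, can be sketched as simple batch planning. The fleet size and nightly rate are hypothetical numbers, not a real migration plan:

```python
import math

# Illustrative sketch of a phased migration like the one described above:
# mirror a fixed number of VMs per night, then flip over on the final day.
# The fleet size and nightly rate are hypothetical examples.

def migration_nights(total_vms, vms_per_night):
    """Nights needed to mirror every VM at a steady nightly rate."""
    return math.ceil(total_vms / vms_per_night)

def nightly_batches(vm_names, vms_per_night):
    """Split the fleet into per-night batches, preserving order."""
    return [vm_names[i:i + vms_per_night]
            for i in range(0, len(vm_names), vms_per_night)]

fleet = [f"vm-{n:04d}" for n in range(1000)]
print(migration_nights(len(fleet), 17))  # 59 -> fits inside a 60-day window
print(len(nightly_batches(fleet, 17)))   # 59 batches
```

This is the project-management side Todd mentions: the per-night rate, not a switch flip, determines whether 1,000 VMs fit inside the 60-day window.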

Beau Hamilton (22:22.78)
You make it sound so easy. You make it sound so simple.

Todd (22:25.8)
You know, it’s not, because underneath most of the hood there is a bunch of open source stuff, V2V, I’ll just throw out the name, people can look that up separately. But this has been around for a while. So yeah, if you need to get out of VMware, it can take it out of VMware and stick it into OpenStack. If you need to, and I don’t recommend this, get out of VMware and go into one of the big public clouds, these systems can accomplish that too. So a lot of the software has matured.

It takes time and it takes project management; it’s a different skill set. You’ve just got to say, yeah, I’ve got to move this amount, and you can’t just flip a switch and do that. You’re going to take some time, make sure everything is working correctly, set up your recommended number of customer or client VMs to be moved each night, and take it on over the next 60 days. And our team is talking to your team, and maybe there are joint Slacks; we’ll generally have a joint Slack with somebody, or joint Teams if we run into that. So the teams are talking to each other, but it’s pretty prescriptive, honestly, nowadays. Yeah.

Beau Hamilton (23:28.014)
I was going to ask what your support system looks like, because I was going to ask if you were mostly concerned with the hardware. But it sounds like you have a tight support group that’s helping these clients migrate over and working with them through the learning curve or any sort of hiccups they might experience.

Todd (23:49.992)
Yeah, we were born out of, and myself very much, being open source oriented and customer-service oriented. And this is the flavor of OpenMetal. So I guess you could say it’s part of why you might buy from us, or also part of why you might not buy from us: we don’t have this really controlled thing where your engineer can only talk to our engineer for no more than, you know, 18.5 hours or you get billed something. It’s like, oh guys, come on. Engineers generally respect that.

They don’t like using other people’s time a ton. So we really just put your engineers together with ours, put our project people together with yours, and have a regular cadence where a comfort level builds up, so that if you’ve got a problem, you can call that person, and that person knows who you are; they saw you on the screen. We like to choose customers that are at enough of a scale that they’re comfortable with that, where they know, hey,

I am going to be able to spend time with you, but I’m not going to overuse your time. I’m going to get your expertise where I need it, or you’re going to help me do this. So for us, we really like to have the engineers talking to each other over a private system; that’s kind of our favorite. It has always proven really effective, because then they can get the answer very quickly. Or a lot of times they just send the question off and know that in a couple of hours, when they come back, they’re going to have an answer that day.

Emergency stuff is different. An emergency is, they hit an emergency button, and the emergency button hits the team that it needs to, because there are SLAs associated with how quickly they need us to respond. Yeah. So there are definitely different workflows for the type of thing that you need.

Well, time is money; I think everyone is aware of that. But if you only look through that lens, it starts to break down the relationship you have with clients, and the overall effectiveness of everything. You can’t just whittle it down to, you get 10 hours with this client to help them with their XYZ problems, and that’s it, and for the rest we’re going to start billing you. The real world doesn’t work that way, right? So having some leniency and just ultimately helping solve the problem, no matter the cost, matters. If it takes you a little bit longer, fine; if you’re able to do it more efficiently, that’s great too. But I imagine you’ve also had a lot of efficiency gains with some of these tools, some of the LLM tools and information out there, that allow providers, and maybe some of your team, to tackle more problems more efficiently.

Beau Hamilton (26:27.662)
Just because there’s so much more information accessible now than there ever was.

Todd (26:32.144)
Yeah, two things in there. If you don’t mind, I’ll dovetail onto the AI and how it may help people, and my thoughts on it. But one of them is this open door policy, or whatever we want to call it. One of the things that often gets forgotten is that we learn just as much as we teach when we’re part of a customer’s experience. So for example, we have a wonderful customer, Flashbots.

They taught us more about TDX and confidential computing than our team knew at the time. Sometimes you get to be close to these edge-case people that are pushing the boundaries, doing something that others can’t do. They have to come to somebody like us, because you can’t do that stuff on the public cloud; the public cloud is going to dictate some of these things. And so, through that policy, we got a chance to learn a ton about that business. In fact, it turned into something where we were able to help others with that same technology, and we had learned it from them, because we took the time to listen and realize, you know, that’s great, other people are trying to do this, we can be part of that ecosystem also. So honestly, I think our engineers are a huge part of our success, both because they’re available and because they listen. And we have this philosophy in all the businesses I work in: an honest desire to see somebody succeed.

If you just stop for a minute and go, you know what, let me help you. Let me actually try to listen and understand where you’re going, and I will help you. I will get you there. And so yeah, for me that allows us to have that mentality as a business, knowing you have to learn a bunch of stuff. And definitely there are some times where you’re like, boy, I don’t know what to do with that new knowledge I have; I hit my limit there. But other times you open up and go, that’s awesome stuff. We should do more of that.

Beau Hamilton (28:10.826)
Yeah, I love that. I love that. That’s a great philosophy to have, I think. And that’s something you’ve crafted from your experience working in your hosting provider days, your customer support days, I assume, like you mentioned. I think that shows, and it shows the importance of working closely with customers and developing that kind of relationship. I think it’s just so, so important and valuable.

So, you mentioned you utilize some AI tools. I’m curious to hear how you’re utilizing some of these in the space.

Todd (28:51.482)
Yeah, so just like everybody, I think, we’re kind of learning what the right, I call them rigs, are. I don’t know what they really are called; some people say harness engineering or something like that. This is where, for us, since we come from a systems background and a development background, often our rigs are Claude Code with a bunch of skills. And since OpenStack and Ceph and all these have been around for a long time, they’re well documented and API-first; they’re very API-driven.

It’s really straightforward to use one of these rigs to say, I want to set up infrastructure, and it already knows how to talk to the APIs. In fact, you don’t even need, sometimes people would say, do you need an MCP for that, to give it context to talk to that API? The fact is you can write a little bit of code trivially with your LLM against the API and run it. You can actually make that the default behavior of your rig: make code, then run the code against the API to perform that behavior. And then, hey, if I like that, I’ll just save the code. And so we’re definitely not the expert when somebody asks, well, how AI-enabled are you? I mean, we’re not there yet; everybody’s kind of learning this and when to apply it. But I’m also a big fan of: use it, but don’t use it to get in between you and the customer. Your client may also be using it, but don’t let it get in between.

You can use it together. You can use it for systems to enable yourself. But for me, honestly, I use it to let a lot of our team members have more time with the customer, understanding what they’re doing, or just catching up and understanding where they’re headed. So yes, we’re definitely quite skilled with it, but we haven’t picked our exact path with it yet.

And so we use it for all kinds of stuff. We even use it where the new generation of our own website that’s coming out is built with Claude Code modified with design skills, of all things, right? So not really directly in our space, but we’re very skilled with these rigs. And we’re starting to apply it for management purposes, for exploring the clouds, to understand what’s in there. So instead of a staff member having to go in there and say, you know what, something’s not acting right, let me spend 30 minutes with an exploratory bash session somewhere, you can actually just get it to do that. But you always want to be careful, and what we don’t do is have it do potentially destructive things for you. Having it go and look up and explore, definitely. Destructive, careful.
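As an aside for readers, the pattern Todd describes, having the LLM write a small script directly against the OpenStack API instead of standing up an MCP server, might look roughly like this. This is a hypothetical sketch, not OpenMetal’s actual tooling: the endpoint and token are placeholders, and the only assumed API shape is the standard Nova compute `GET /servers` call authenticated with an `X-Auth-Token` header.

```python
import urllib.request

def build_server_list_request(compute_endpoint: str, token: str) -> urllib.request.Request:
    """Build a read-only Nova API request that lists servers in the current project."""
    return urllib.request.Request(
        url=f"{compute_endpoint.rstrip('/')}/servers",
        headers={"X-Auth-Token": token, "Accept": "application/json"},
        method="GET",
    )

# Placeholder endpoint and token; a saved script would load these from clouds.yaml
# or environment variables rather than hard-coding them.
req = build_server_list_request("https://cloud.example.com/compute/v2.1", "demo-token")
print(req.get_method(), req.full_url)
```

If a generated script like this turns out to be useful, it can simply be saved and reused, which is the “make code, run it, keep it if you like it” default behavior Todd mentions.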

Beau Hamilton (31:20.206)
So that’s the thing, I’m glad you mentioned that, because there was a recent report, I believe it was yesterday, where Amazon was having an all-hands meeting, talking about some of the recent outages attributed to AWS. It was attributed to basically pushing sort of vibe-coded updates without proper review, at least that was the rumor about it, and how that was starting to wreak havoc and ultimately cause some of these hour-long outages. Which, if you’re a big hyperscaler, means serious headaches, serious money, upset customers. So yeah, I think it’s a good point you mentioned: having that proper review, knowing the limitations, and not going all in on some of these tools before they’re maybe ready, before you have a good framework for how you plan to validate and check and review the work. I was going to ask too if you had some agentic AI components that are kind of going into the metal, so to speak, flipping switches and executing certain tasks?

Todd (32:35.272)
Yeah, no, no, it’s like you said, too. Especially with some of our systems, a botched network update can be really problematic because it has a huge effect across everything. And there are definitely pinch points in all cloud systems where there’s a lot of dependency on that area, and if there’s a problem in that area, yeah, you really need to

Walk carefully into that. And for us, no, we don’t. We have a lot of human in the loop, as we’ll call it. At this stage, there’s no potentially destructive stuff that an AI would do by itself. Exploratory things, yes. And then again, for our own skill sets, we’ve just been building up with things where, yeah, if we roll a botched update to our website, that’s not gonna hurt a customer. That hurts us and our sales, right? So we’ll fix that. But it doesn’t have that level of risk. So no, you need to take a lot of care with that.

And so for us, we’re stepping into it with exploratory things, things it can go and collect for us that would have been time-consuming, data sets it can analyze, but not taking actions that could be potentially destructive at large scale. And some of those things have been solved already; like updates and things like that for OpenStack, they’ve been solved already. Just follow that pattern, and it’s not particularly time-intensive to do so. So yeah, apply it carefully, I would just say.
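That “exploratory yes, destructive careful” split can be made mechanical with a default-deny gate in front of whatever commands an agent proposes. Below is a minimal sketch, assuming the agent emits `openstack` CLI commands; the verb lists are illustrative, not a complete inventory of safe and unsafe operations.

```python
# Default-deny gate for AI-proposed commands: auto-run known read-only verbs,
# hold everything else (including anything unrecognized) for human approval.
READ_ONLY_VERBS = {"list", "show"}
DESTRUCTIVE_VERBS = {"delete", "set", "unset", "reboot", "rebuild", "resize", "migrate"}

def classify(command: str) -> str:
    tokens = set(command.split())
    if tokens & DESTRUCTIVE_VERBS:
        return "needs-approval"   # a human must confirm before this runs
    if tokens & READ_ONLY_VERBS:
        return "auto-run"         # safe to execute unattended
    return "needs-approval"       # default deny: unknown verbs go to a human

print(classify("openstack server list"))          # auto-run
print(classify("openstack server delete web-1"))  # needs-approval
```

The important design choice is the last line of `classify`: anything the gate does not positively recognize as read-only goes to a human, so new or misspelled verbs fail safe.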

Again, I’m a huge fan of it, because I feel like AI, at its most fundamental level, has given small and medium businesses this huge leg up on some of their larger competitors. Because you could never have specialists before that could do this level of marketing, or specialists that knew exactly how to do this very fine-grained part of the stack. And you’re able to now tap into some of those skill sets that are inside of AI.

And again, I think it’s a playing-field leveler for smaller orgs. In many cases, that’s like our company, or the clients that we’re helping; they’re going after their bigger competitors. So I’m like, this is a boon for you. Your team of five, that’s all you could afford, can now do twice as much as they did before. Now you’re going to have to figure it out; you’ve got to walk into it, and you have to give them time to learn. And this is my view. I know a lot of companies are like, great, then I can just have three people do what the five people did before. And I’m like, yeah, you could. But what you could do, which would be even better, is have those five people do even more. That lets you up your whole game and go against that bigger competitor you always looked at and said, I wish I could just be a little bit bigger. There’s a book, what was it?

Crossing the Chasm, where they talk about this valley of death, where you as an organization aren’t quite at the size you need to be to take on these harder problems or these bigger systems that would normally require more people. This is the funny thing: AI shifted that whole crossing-the-chasm, valley-of-death situation. It just literally said, actually, your person, your team, can now cross that by itself. But it’s going to take a little bit of learning. You’ve got to step back and say, hey, look, I’m not going to be a software developer anymore. I’m going to be a product engineer.

I’m going to know the product, and I’m going to use my AI systems to help me build that thing. But I get to build it faster, so I get to try more features, try different approaches. I get to ideate and build those things more quickly. And so yeah, anyways, I’m a huge fan. If I could share anything, it’s that I really feel people should say, no, I have a team of five; they’re going to be able to do greater things.

Not, I had a team of five, so now I can have a team of three. I think that’s a super dangerous approach. You don’t want to go at it that way. Some companies will absolutely have to end there; there may not be any choice, and they don’t get across that. But I would really start with: leverage your team to help them do more. Give them time to adjust to this, and help them adopt these new rigs or AI-enabled stuff.

But anyways, sorry, I know that’s a totally different subject. It has nothing to do with OpenMetal. Yeah.

Beau Hamilton (36:47.532)
Yeah, no, I love this. I love this conversation; I think that’s my favorite part, just thinking about the bigger picture of it all and what it’s doing. I think a lot of the companies that are making cuts in the name of automation and AI needed to make those cuts well before these AI tools came down the road. They’re just using that as an excuse, so to speak. They had workforce issues to deal with well before this, and maybe they’re hurting in certain areas. And I think what you’re saying about the learning curve is right: some of these tools are able to replace or automate tasks that humans were doing, so the people who were doing those tasks have to go out there and learn and pivot a little bit. But the bigger picture of it all is, there’s always more to do. I mean, there’s always a long laundry list of tasks.

And so if you’re able to automate a lot of the tasks and spend your time executing the things you’re planning on doing and still need to do, it just makes everyone a lot more efficient, and it unlocks more growth. It unlocks more discovery. Just the ripple effect of that, of democratizing knowledge and tools, is exciting to think about, and how it’s going to play out in the big picture. But back to what you were saying about learning: helping transition your team members into potential new roles, helping with that learning curve, and then also having a human in the loop with the tools you do utilize to make sure everything’s going according to plan. I think that’s all a very good approach. Yeah, and I’ll have to look into that Crossing the Chasm book you’re mentioning.

Todd (38:56.134)
Yeah, it’s from some time ago. And I wish I could quote the author, because it was formative for me, because I’ve had many businesses and still run quite a few. And there are these defined phases between small and medium and large, where sometimes you’re trying to take on bigger problems and you don’t have the scale and the size or the skill sets to do so. AI reset that whole thing. There’s still this crossing the chasm, but it moved around your valley of death. That’s the term they use for the spot that’s kind of hard to get out of.

But yeah, check it out. Crossing the Chasm. Boy, I’d better look that up and make sure I’m even quoting the right title. Yeah, thank you. Thank you.

Beau Hamilton (39:28.992)
Yeah, we’ll have to fact-check that. Along the same lines of AI and this technology that’s been making a splash that everyone’s talking about: I’m curious, what are you looking forward to in terms of new developing tools and technology trends coming down the pipeline in the next, let’s say, six months to a few years? Is there anything you’re particularly watching, looking out for? It could have something to do with AI, or it could just be a trend specific to your industry.

Todd (40:09.82)
You’re definitely going to see more AI system administrator type things. AI has done a really good job with development; it’s allowed developers to become product engineers instead of necessarily being directly in the code as much. So I think that’s going to carry over: you’re going to see more things associated with system administration. And I would still suggest, though, that you always look for a human-in-the-loop spot, where your system administrator AI buddy, or whatever, is going to write you Terraform and not just go apply it. That’s a dangerous spot. You need to always pick where the human in the loop is gonna be. So I definitely see that coming through. You also see design starting to really take off, where AI is getting better at design, and a lot of these tools are popping up. This one’s kind of funny: there are two that are kind of cutting edge in the design world, kind of bonded with Claude Code. One’s called pencil.dev and the other one is paper.design.

So pencil and paper, who are not related at all and are definitely competitors, have both decided to go that route. Very creative, I guess, that it all kind of worked out for them. So yeah, I definitely see rigs, as I call them, and this is specific to our space, so maybe my view is colored by this: if you want your AI to have superpowers, you also have to give it infrastructure. In our case, we do this with OpenStack. So you’ll have what’s called a project in OpenStack, and it has access to a bunch of resources: block storage, object storage. It has everything inside of it: networks, firewalls, VPN as a service, etcetera. So what you can do is stick your AI inside of this and then give it the freedom, as it needs it, to make things. It’s already inside of this private space; it just fires this stuff up. So I do see a new set of workloads that are gonna come through.

And whether it’s the IT department in companies that administers it, or maybe it’s the department itself, like a development department, they have to give out these rigs and the surrounding infrastructure so the rigs can be really efficient: well, I’m pulling some unknown data down, or I’m pulling some unknown code down, so I need a temporary VM to go try this first to make sure it’s not polluted with something. Okay, cool. I can just spin that up, try that out. That’s good. So I definitely see that.
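That throwaway-sandbox idea maps naturally onto the OpenStack compute API: the agent creates a short-lived server inside its own project, tests the unknown code or data there, then deletes it. A rough sketch of building the create-server request body follows; the image, flavor, and network IDs are placeholders, and the body shape follows the standard Nova `POST /servers` format rather than any OpenMetal-specific tooling.

```python
import json
import uuid

def make_sandbox_server_body(image_ref: str, flavor_ref: str, network_id: str) -> dict:
    """Nova create-server body for a disposable sandbox VM, tagged for cleanup."""
    return {
        "server": {
            # Unique name so repeated sandbox runs never collide.
            "name": f"ai-sandbox-{uuid.uuid4().hex[:8]}",
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
            "networks": [{"uuid": network_id}],
            # Metadata lets a janitor job find and delete stale sandboxes.
            "metadata": {"purpose": "ephemeral-ai-sandbox"},
        }
    }

body = make_sandbox_server_body("IMAGE-UUID", "FLAVOR-UUID", "NETWORK-UUID")
print(json.dumps(body, indent=2))
```

The agent would POST this body to the compute endpoint, run its experiment, and then issue the corresponding `DELETE /servers/{id}`, so nothing leaks into the long-lived environment.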

Beau Hamilton (42:31.17)
But also make it secure and private to those specific resources, right? In a bubble, yeah.

Todd (42:34.28)
In a bubble, in a bubble. And that’s why I always say, yeah, you want to keep it contained; you’d be surprised how that thing will start going and doing stuff that you don’t intend it to do. The red teaming stuff I’m really excited about. Security is always important for everybody, and AI is definitely being used by the attackers, the real ones. So in our case, it’s about enabling ourselves and enabling customers with greater knowledge and technique to make sure their infrastructure and their code stays safe.

So we’ve got some work going on with red teaming. In that same setup, your red team skill or AI can be inside of your bubble. And so as you’re producing this code, you can also say, hey, at each major step point, I want to spin off a red team attack of it. And it can look at your code, and then also attack it as if it were an external thing. So I see a lot of that stuff coming up, where as a smaller org, you didn’t used to be able to afford a crack red team to go after your stuff. Well, guess what? You can now.

Now, you still probably should go with an official red team, a specialist company, once you’re at a scale where your stuff is gonna be attacked that much. But there are all these new skills and tools coming that, again, for us, I think are wonderful, because it’s an equalizer. It lets you do some of the things the big clouds used to be able to do: they could go build a red team system that you point and click at, and they had the five people to update it. We could never do that before. Now we get to do it with a great sysadmin who’s AI-enabled, I guess you could say.

Beau Hamilton (44:07.182)
That’s really exciting. Yeah. And it’s also exciting for yourself and your team, the engineers; you have all these new capabilities, these new tools to play around with and work on and experiment with and tinker with. And I think just from a fundamental quality-of-life standpoint, you ultimately want to enjoy what you’re doing, enjoy your job, enjoy what you’re working on. And speaking for myself,

I’ve really enjoyed tinkering with a lot of these tools and seeing how they fit into my workflow. And so I can imagine the same could be said about what you guys are doing over with the private cloud. So yeah, you’ve given me a lot to think about. My mind is just spinning with possibilities of how this technology is going to impact your space, as well as the hyperscalers and the greater tech landscape and society at large. There’s a lot to think about.

Todd (45:09.382)
No, I tell you, one of the things, and I know we’re pretty far off of OpenMetal, so we’ll come back to that maybe in a second, but for me, I’m also in kind of, I don’t know, an existential crisis about whether this is the right thing. I’m a big open source person, and have been for many, many years. But part of the reason open source exists is to allow this kind of combination of resources to produce software and systems that, because they were so expensive to build, you needed to build in the public domain, essentially, to allow all these companies with their use cases to come together into this wonderful, rich open source system.

But for me, I don’t know how to process this yet, so I’m always curious what people think about it. Now that code is relatively inexpensive to make, it kind of removes one of the reasons open source existed, one of the driving monetary reasons. People are always like, well, the theoretical ideal of sharing is a big driver. Of course it is, but you really also need to back anything up with a proper revenue source, a reason that it can support itself. And I kind of wonder what’s going to happen with open source, because you just have less reason for it to be there now that code is very easy to make. At least for me, that was always one of the foundational reasons open source is there: it’s this software set that people worked very hard on, and they wanted to share it because it was valuable. An expert person did that, or many, many expert people. But now what? If the AI can build a lot of this stuff trivially, where does that leave a lot of the companies that would have normally contributed to open source?

Beau Hamilton (46:58.956)
Right, there’s an existential crisis there. It just makes you question the whole ecosystem, in a sense. I think the software space, especially the SaaS space, has been kind of concerned or thinking about that, to say the least. And I think a lot of the discussion pivots, with AI at least, to the open source models versus the closed models and the different fundamental schools of thought there. You see China going with the kind of open source approach for, speculatively, a few reasons. I think one is that they’re not able to get a lot of the high-end compute to compete with some of the closed source models coming out of the West, but also maybe to undercut the margins of the closed models and what they’re able to do from a revenue standpoint. So there are so many angles. Yeah, it’s wild. But I like to think that, say you adopt an open source LLM, you’re able to really start tinkering and specializing in very fine, dedicated use cases that are going to be immune to some of the consolidation you see from the big players, from ChatGPT to Claude. So I think there’s still going to be some value in having the smaller guys out there. But it’s interesting to think about, absolutely.

Todd (48:43.456)
Yeah. Yeah, that’s what I say. There are so many directions inside of there. And for us as a, what’s this called, a fundamental provider of infrastructure, we’re in a spot that’s less affected; people are like, well, I need more cloud, whether it’s private, public, whatever. So for us, you could definitely say, okay, we have a bright future that way, but it’s also about navigating what some of the services are that we’re gonna offer. So currently we are not a heavily GPU-focused company, because in many cases some of the inference out there is being sold at under cost. So they’re intentionally doing what you describe: they might be out there trying to undercut, trying to take a market position. And so they’re willing to do inference on a good model, and many of the open source ones are very good models.

And so they’re willing to do that. So there’s kind of this unusual situation when it comes to that kind of stuff, and you definitely want to be careful as an infrastructure provider as to where that stops. I have seen some of these really cool inference-specific hardware manufacturers, Cerebras, if I’m remembering the name right. There’s a set of these, yeah, that are kind of purpose-built; they’ve started to grow the actual silicon around the use case, whereas, of course, GPUs were adapted. They’re still called GPUs, I guess, even though they don’t really do graphics anymore, right? So there’s going to be this interesting kind of shift as this occurs. For us, we’re excited about that shift. We feel like there’s going to be a nice leveling when some of these other AI-specific systems come out. You know, AWS has their Trainium, if I’m remembering it right.

And GCP, of course, has their tensor stuff, which has been around for a long time; they were leaders in that. But as more of that technology gets replicated, some of it obviously patented and all that good jazz, some of these other companies are now bringing it forward. And we’re excited to be part of that. Hopefully Cerebras, if I’m remembering the name right, will call us and go, hey, you guys are actually uniquely set up so that we could partner with you to place these purpose-specific systems for a customer they have, a customer that doesn’t want to be in the data center business. So I think for us, it’s a bright future we get to be part of. And again, for me, it goes back to: the customers we’re typically working with are personable enough that they’re willing to talk to us, and they’re large enough that we can spend that time with them and explore what those options are. So for us, I would definitely say I’m excited that we’re in a space where we’re benefiting from all these changes. VMware, of course, and Broadcom, they kind of handed us a big old gift also: oh look, these mid-sized companies are not going to be able to be on VMware anymore.

So to the VMware people out there who are coming up on their renewal hike: we also see this as a pattern for us. Earlier on, we were talking a little bit about customers that come to us and where we’re valuable. This is also one of those cases: the cost of the licensing for some of these private systems has grown beyond the actual value the client is getting from it. And we’re open source, so we don’t make money off of licensing. You can spin up as many VMs as you want, and you can spin them up and down and up and down, and there’s no license. You’re not even thinking about that, right? That’s just a thing you can do. So yeah, I’m trying to think of other things as we come back from AI into the infrastructure discussion here. But yeah.

Beau Hamilton (52:19.99)
Yeah, I think that’s an exciting, optimistic outlook, and as an optimist at my core, I like to look at the bright side of things and look at the possibilities. I think you’ve made a strong case for what’s to come and the kinds of partnerships that are on the horizon. You’ve also made a strong case for some of the alternatives to the public cloud and the closed cloud systems from the giants, the hyperscalers out there. So yeah, as we come down to the last set of questions I have for you, I’m just curious if there’s anything else you’d like our listeners to know about OpenMetal specifically, or just this broader shift towards the open private cloud.

Todd (53:09.832)
I think it’s often wise for some of the leaders who are trying to plan this out, let’s say you’re head of infrastructure, you’re CTO, or somebody who is really balancing both financial and velocity-based decisions while also working with their team. Many times I get calls from some of those senior people, where I’ll get on the call myself. And sometimes I just say, no, you know what? You’re not at that spot. You really should keep doing what you’re doing. Maybe you should peel off this piece and put it over there, or maybe put something with us, or maybe just, hey, here are my recommendations. I’ve got a lot of FinOps knowledge, and I’m able to say, okay, if you’re over here doing this, maybe this is the place you could look. I often just encourage some of those senior leaders to talk to us. We’re not a hard-sell company.

So a lot of times I definitely encourage it; we’re happy to provide value where we can. In some cases it’s, yes, you should get ready to move, but call me back in a year; you’re not where you want to be yet. Or, hey, scale-wise, keep doing what you’re doing there, it’s great. Usually companies that come to us have this very, I don’t want to call it steady state, because sometimes it varies, but it varies by $20,000 a month on top of a $100,000 bill. So what’s the problem you’re trying to solve? You’re not trying to solve the $20,000 variable part; you’re trying to solve the $100,000 base. Look at your team. Will this make your life easier, their lives easier? Do they get to spend more time on the engineering instead of on the FinOps side of it? Those are often the places. But yeah, I usually say, for me, part of my joy in life, I guess, is learning from other people.

And so I’m always like, oh, so somebody at this level, with this experience and this type of background, is calling to ask my advice? I’m like, ooh, I’m going to get a bunch of good stuff out of that. So hopefully listeners take away that it’s a super approachable company. We don’t nickel-and-dime things. We’re happy to talk, we’re happy to learn, we’re happy to teach. That’s kind of the mantra we live by. And yes, we run our own data centers. If you need it in Singapore, we’re in Singapore. If you need it in Europe, we’re in Amsterdam; and here in the US, of course. And yeah, we really like being able to deal with challenges. Sometimes we can just help and send you on your way, and sometimes we’re going to be a fit and we get to be a partner for the next five years, which would be great.

Beau Hamilton (55:47.874)
That’s great. Yeah, that’s great advice. And I think every good partnership ultimately comes down to communication, and it doesn’t hurt to just reach out and see what you guys can do for customers and vice versa. For those listening who are interested in this approach that you guys have with the private cloud, what’s the best first step or place to go to learn more about your company?

Todd (56:22.706)
Yeah, so you could search us, like OpenMetal tipping point; that’s an article that talks about when this might be a good fit for you. We have transparent pricing, so our calculators are on the site, and you can see, okay, well, how much is a cloud? How much is a storage cluster? How much is the bare metal? These things are up there. And yeah, we do use AI. We have a home-brewed AI that will take your first initial sales chat and then connect you based upon the skill sets involved; it’s trained on our own knowledge. So it’ll actually say, well, you’re going to talk to Todd, because you asked about this stuff and he knows about that stuff, so we’re going to connect you there. We use some light AI to get you quickly to the person who actually knows how to address your particular segment or your problem. And you can always schedule with us, of course, or hit us up on the chat system.

There’s another chat system for talking directly to us, and you can always email the old-fashioned way, or send it directly to me. Find me on LinkedIn; I always suggest that’s an easy way. Just look us up, OpenMetal, or Todd Robinson on LinkedIn, 2 or something, I guess; that’ll get you there. I think I’m two; I was too slow, I guess, when I registered it. Yeah, yeah, I gotta get a hold of that guy.

Beau Hamilton (57:34.776)
Todd Robinson OpenMetal, that should get you there, right?

Beau Hamilton (57:40.418)
I think you’re right. I think you’re right. You’ve got to find that Todd Robinson one. But that’s okay. That’s great. I will put all the resources down in the description, as well as the SourceForge Podcast article that will coincide with this episode. So listeners, you can go there to learn more, get in contact with Todd and his team, and learn more about OpenMetal. I think there’s a lot of value and insight that you shared with us. Some of it has gone over my head in terms of technicalities, so I’ve got some homework to study up on, but for the listeners in this space, I think you’ve left them very excited about what’s yet to come with some of these shifts we’re seeing, the alternatives, and just hearing about other options. Sometimes you work with a provider for so long, you feel almost stuck; that’s just the way things are, it’s the status quo, and it’s hard to migrate and shift to other options out there. But I think you’ve made a compelling case. So yeah, I appreciate everything you shared with us today, Todd.

Todd (58:57.82)
Yeah. Yeah, no, happy to be here. And I know we’ve wandered around in some AI space and everything, but again, I’m excited that we’re in a nice spot. We’d love to be able to help out customers who have reached their limit with whatever their provider is, or have just fundamentally said, yeah, this is where we want to be; we want to be in a cloud that’s customized to our workloads. Stop having to force our engineers to do these special things for that special cloud; instead, change the cloud to fit your stuff. So yeah, I’m excited. Thanks, Beau. And again, sorry for wandering around on some of that stuff, but hopefully it’s useful, or at least entertaining, for your viewers.

Beau Hamilton (59:37.834)
Absolutely. No, it’s all good. I love all the different tangents, and I think listeners will as well. Well, that’s Todd Robinson, president at OpenMetal. Thanks again, and thank you all for listening to the SourceForge Podcast. I am your host, Beau Hamilton. Make sure to subscribe to stay up to date with all of our upcoming B2B software-related podcasts, and I will talk to you in the next one.