Frontegg is a developer-first customer identity and access management (CIAM) platform that makes it effortless to build secure, enterprise-ready authentication, authorization, and user management into any SaaS product. With multi-tenant support, powerful APIs, and <1ms auth checks, Frontegg empowers teams to scale faster, stay compliant, and deliver seamless user experiences across apps.
In this episode, we speak with Sagi Rodin, Co-Founder and CEO of Frontegg, about the evolving landscape of AI agents and the unique security challenges they present. The conversation delves into the risks associated with autonomous AI systems, including prompt injection and data leakage, and emphasizes the importance of robust identity management and auditing practices. Sagi shares insights on the Model Context Protocol (MCP) and its role in securing AI interactions, as well as common pitfalls startups face in managing security. The discussion concludes with predictions for the future of identity and AI security, highlighting the need for adaptability and proactive measures in a rapidly changing environment.
Watch the podcast here:
Listen to audio only here:


Learn more about Frontegg.
Interested in appearing on the SourceForge Podcast? Contact us here.
Show Notes
Takeaways
- AI agents operate autonomously, presenting unique security challenges.
- Prompt injection and data leakage are significant risks.
- Monitoring AI agents requires dynamic auditing and logging.
- Early adopters must prioritize identity management and authorization.
- MCP serves as a transport layer but needs additional security measures.
- Basic authentication must be tenant-aware to avoid security issues.
- Security is essential for enabling innovation in AI applications.
- Trusting instincts and building a strong team are crucial for success.
- The future of SaaS will involve more natural language interactions.
- Proactive security measures are necessary to protect user data.
Chapters
00:00 – Introduction to AI Agents and Security
02:59 – Understanding the Risks of Agentic AI
05:53 – Monitoring and Auditing AI Agent Activities
08:50 – Security Must-Haves for Early Adopters
12:03 – The Role of Model Context Protocol (MCP)
14:58 – Adoption of MCP and Competitive Landscape
17:51 – Common Mistakes in Identity and Security Management
21:02 – Future of Identity and AI Security
23:48 – Advice for Entrepreneurs and Innovators
Transcript
Beau Hamilton (00:00.802)
Hello everyone, and welcome to the SourceForge Podcast. I’m your host, Beau Hamilton, senior editor and multimedia producer here at SourceForge, the world’s most visited software comparison site, where B2B software buyers compare and find business software solutions. In today’s episode, I’m joined by Sagi Rodin, Co-Founder and CEO of Frontegg, a company that is rethinking identity and access for modern SaaS, and more recently for this new agentic AI era we all find ourselves entering.
That’s this almost dystopian future where AI systems act as autonomous agents: they make decisions, take actions, and interact with apps and user data all on their own. That opens up a bunch of exciting possibilities, but also a whole new wave of security risks, and that’s really where our conversation will center today. We’ll talk about what makes securing AI agents different from traditional users, the biggest risks companies need to prepare for from day one, and how Frontegg is helping keep these agents in check without slowing innovation. So it’s going to be a mix of technical insight and big-picture perspective. Let’s get right into it. Sagi, welcome to the podcast. Glad you could join us.
Sagi Rodin (01:11.51)
Great to be here. Thanks for having me. Exciting topic to talk about, for sure.
Beau Hamilton (01:15.84)
Yeah, for sure. There’s a lot of stuff we’re going to cover today. I just want to get right into it. I know it’s obviously important to secure our apps, our user accounts. Most of us have passwords for our phones and our work laptops, hopefully. And we use authenticator apps when signing into things like Slack or Teams. What’s different about securing AI agents compared to traditional apps or users?
Sagi Rodin (01:41.1)
Yeah, agents are not just screens on top of APIs, right? They plan, they choose their tools, and they act on users’ behalf. And in the near future, they will act on their own behalf. So I think that identity turns into a chain of accountability: user to agent, to tool, to data, to outcome. At the end of the day, you authorize steps, not just endpoints. And you scope everything to a tenant: who is your customer, which kind of resource they want to access. And also, obviously, we need guardrails that look at the intent, not only at the tokens. It’s very hard to predict how exactly your APIs will be used.
In the era of traditional SaaS, it was very easy. You could build the UI, and you had full control over how everything would be used. That’s not the case anymore. So I think that security becomes more natural in a way, just like natural language. And we as builders definitely have to adapt to that.
Beau Hamilton (03:01.474)
Yeah, totally. So knowing the basic premise of AI agents, how they’re designed to work autonomously to carry out specific tasks, one can obviously see the potential for things to go wrong. So can you elaborate on some of the risks that have come up with agentic AI? What are some of the negatives you’ve seen associated with them?
Sagi Rodin (03:24.364)
Right. So I would say that the usual suspects are obviously prompt injection, tool misuse, data leakage, and those long-tail edge cases that you just didn’t predict. And I think the answer there is defense-in-depth strategies. Inputs and outputs are checked for relevance and safety, PII gets filtered, and write actions definitely carry higher friction than just reading data. But also, when you open up the data to those AI agents, they will definitely try to please the users that asked for the information. And thus, you have to make sure that there are no loopholes at all being opened.
Now, it might sound like, yeah, but I had to do that before, right? But it’s a bit different, because before, we had the assumption that we’re protecting only against rogue players, against hackers. Now we have those agents that might have a good intent, but they’re pleasing the ones that ask for the information. So they will find any path to bypass the guardrails that you had in place. We have to make sure that it’s really, really tight, even more than before.
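The defense-in-depth measures Sagi describes, checking inputs and outputs, filtering PII, and putting higher friction on write actions, could be sketched roughly like this. All names, patterns, and rules here are illustrative assumptions, not Frontegg’s actual API:

```python
import re
from dataclasses import dataclass, field

# Hypothetical PII filters; a real system would use far more robust detection.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-shaped strings
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

@dataclass
class ToolCall:
    tool: str
    args: dict = field(default_factory=dict)
    is_write: bool = False  # write actions carry higher friction than reads

def redact_pii(text: str) -> str:
    """Filter PII from inputs and outputs before they cross the boundary."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def check_call(call: ToolCall, user_approved: bool) -> bool:
    """Reads pass by default; writes require explicit human approval."""
    if call.is_write and not user_approved:
        return False
    return True

# A write action without approval is blocked; a read goes through.
blocked = check_call(ToolCall("delete_records", {"table": "users"}, is_write=True),
                     user_approved=False)
print(blocked)                                      # False: friction on writes
print(redact_pii("contact: alice@example.com"))     # email gets filtered
```

The point of the sketch is the asymmetry: reading data and writing data get different treatment, and every string an agent sees or produces passes through the same filter.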
Beau Hamilton (04:56.192)
Yeah, it’s really interesting to try to visualize the sort of guardrails you have to have in place, and then think about the general misuse of agents. You mentioned prompt injection attacks that can wreak havoc, which is when someone basically sneaks malicious instructions into the input for the AI and tricks it into ignoring the original task or security rules. I just want to continue to extrapolate: what are some specific examples you’ve seen of what rogue agents are capable of? Anything that comes to mind that you’ve seen already in the industry here in 2025? Because AI agents are just starting to come on the scene and scale, so to speak. I mean, they’ve been teased for the last couple of years, but now we’re really starting to see them deployed. So have we already seen any wide-scale attacks or issues associated with them?
Sagi Rodin (05:56.823)
Yeah, for sure. So we had several issues just in the last few months. Obviously, we all heard about the Replit incident, where, without any bad intention, a request was sent that manipulated big chunks of data.
And the agent found a way to do that, and basically there was this mess happening because of it. So we definitely see that happening. We see data leaks: I ask a question, and a response comes out to me that I’m not supposed to see. So definitely we see things like that happen. And I just came back from the East Coast, where we had a conference talking about AI and how things are changing. People are talking about how they want to enable things, but they keep hearing about data getting revealed, people getting bits of information that they’re not supposed to see, and they’re just not proceeding with the adoption of AI agents.
And I think that this is something that will happen gradually. If I’m comparing it to six months ago, or definitely a year ago, when these things were just beginning and people were more reluctant to adopt AI into their products, we’re definitely seeing progress. But those events that keep happening are definitely affecting the adoption.
Beau Hamilton (07:43.36)
Yeah, well, it seems like everyone feels like they have to incorporate AI into their product or service in some shape or form. So it’s definitely important to be proactive with security so that you can stay ahead of the potential threats. And it’s also just interesting to see where this is going; I can see this being the next big security problem, so to speak, with more AI agents entering the scene.
I feel like the recent examples are ransomware and how prevalent that is, and then unsecured database buckets leaking data. There are all these different trends based on what’s emerging. So it’s good to see that you’re focused on being proactive and helping a lot of these startups, but also established legacy companies, secure their AI agents. When a company launches an agent and makes it go live, so to speak, how do you keep track of what the agents are doing and catch anything unusual? Is there an activity log you’re monitoring?
Sagi Rodin (08:55.958)
Yes, I would say that we treat agents kind of like microservices with receipts. Every tool call is a structured event. We know who acted and in which tenant (we call tenants the end customers, basically). Which scope did they use? Which resource did they try to access? Why did they try to use it? So the abilities we have now are not only to look at the raw event, but also to find the intent.
And obviously, to do that, we also use some LLM capabilities. Those streams feed audit logs that you can replay afterwards, and you can set up alerts. We definitely try to flag anomalies, like unexpected tool combos. Maybe a step back for those who don’t know: agents talk with tools, basically. They don’t talk to APIs, at least not in the same manner that we’re used to. And we want to make sure that we understand how those tools are being called, how they’re being used, which tools convert to which API calls, and in what sequences. So we want to make sure that if there is unusual timing in calling some tools, different spikes, sensitive scopes, or just repeated near misses, that hints maybe there’s an injection or something that is off.
And I think that when something feels off, we can definitely freeze the agent, we can double-check the user that the agent is acting on behalf of, we can maybe dynamically downgrade the scopes of the user. So we have a lot of possibilities, but the thing is that it has to be very dynamic, right? It has to always keep up with the pace of the agent. And again, this is a bit different from what we’re used to, when everything was static, predefined, and maybe easier, I would say, to follow. Now we have to move fast. So yeah, auditing is definitely important, but I would say that reacting very fast, and using the wisdom of the LLMs to also provide these kinds of on-time protections, is a huge capability. And we’re definitely trying to use that.
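The “microservices with receipts” idea, where every tool call becomes a structured event scoped to a user, tenant, and intent, and anomalies trigger a fast reaction like freezing the agent, could be sketched like this. The event shape, threshold, and function names are assumptions for illustration only:

```python
import time
from collections import defaultdict

# Replayable audit stream: every tool call is appended as a structured event.
audit_log = []
call_counts = defaultdict(int)

RATE_LIMIT = 5  # hypothetical threshold for flagging an unusual spike

def freeze_agent(agent_id):
    """The dynamic reaction when something feels off: freeze and re-verify."""
    return f"agent {agent_id} frozen pending re-verification"

def record_tool_call(agent_id, user_id, tenant_id, tool, scope, intent):
    event = {
        "ts": time.time(),
        "agent": agent_id,
        "user": user_id,      # who the agent acts on behalf of
        "tenant": tenant_id,  # the end customer
        "tool": tool,
        "scope": scope,
        "intent": intent,     # not just the raw event: the declared intent
    }
    audit_log.append(event)   # feeds audit logs, replay, and alerts
    call_counts[(agent_id, tool)] += 1
    # A real system would also look at tool combinations, timing, and
    # scope sensitivity; here we only flag a simple call-rate spike.
    if call_counts[(agent_id, tool)] > RATE_LIMIT:
        return freeze_agent(agent_id)
    return "allowed"

for _ in range(6):
    status = record_tool_call("agent-1", "u-42", "acme", "export_data",
                              scope="read:reports", intent="monthly report")
print(status)  # the sixth repeated call trips the threshold
```

The key design point from the conversation is that the log is not just for after-the-fact forensics: the same stream drives the immediate reaction, so the response keeps pace with the agent.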
Beau Hamilton (11:38.932)
Interesting. Yeah, I mean, you have a lot of guardrails, but the potential for these agents to act nefariously, and the speed at which they can do so, is kind of alarming. So yeah, you have to act quickly. It makes me think of this dystopian what-if scenario I’ve always had: somebody is working on an advanced supercomputer project that’s kind of off the grid, but then they flip the switch, so to speak, connect it to the internet, send it out to the masses, and it’s able to wreak all this havoc so quickly before anyone can catch up and quell the problems associated with it. I feel like that’s kind of an example here with some of these AI agents. Granted, it’s early days, and you have the guardrails in place, so you have some of these flags monitoring what’s going on.
So, okay, for companies rolling out this tech, especially early adopters, there’s always that question of what absolutely has to be in place on day one from a security standpoint. What are some of the biggest security must-haves that these early-adopter companies should think about from day one when they’re first rolling out an AI agent?
Sagi Rodin (12:57.238)
Yeah, great question. First of all, I would treat identity as the foundational piece of it. I might be biased, but we definitely have a lot of conversations in the industry, and I think a lot of those foundational identity pieces apply: authentication, understanding who is accessing your resources and your app; then authorization, deciding what they can or cannot do within your application; and auditing. The same foundational pieces. Let me break it into a few quick tips that I can provide.
So first of all, at this stage, we definitely want to bind any agent to a user. And if it’s a B2B application (a lot of our customers are B2B, so it’s relevant to them), not only to a user but to a tenant. We don’t want to see orphaned actors in our system. I think that this will definitely evolve. We are already working on ways to identify agents and to provide multi-factor authentication for agents that act on their own behalf and were not sent by a known user or customer of ours. But that is still work in progress, and I would say that for now, for every agent, make sure you know who it acts on behalf of. The second thing I would say is fine-grained entitlements per each tool, per each resource, per each action.
We want it fine-grained. We really want to understand who is trying to access what, and whether they can do that. So definitely an important piece of it. Third thing: use short-lived scoped credentials, and definitely try to isolate the environments. We don’t want to see data leakage between your tenants. Super, super important.
Sagi Rodin (15:06.898)
And maybe the last thing I would say: obviously, we have the auditing part. But in these days of users starting to access applications through the likes of ChatGPT, Claude, and Gemini, through connectors to those SaaS applications, we want to always make sure that we know who is the user working behind the ChatGPT, for example.
For important actions, we want to make sure that we have guardrails like step-up, for example. So if they connected their Claude account, I don’t know, like a month ago, and now they’re trying to do a million-dollar transaction inside our system, we definitely want to step up on the response and make sure that we still understand who the user is, that they’re still connected, that they can validate their identity. So these kinds of human-in-the-loop or human-approval flows are super important as well.
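The must-haves above (agents bound to a user and tenant, fine-grained entitlements, short-lived scoped credentials, and step-up for sensitive actions) could be combined into one small sketch. The token shape, TTL, and step-up threshold are hypothetical, not a real Frontegg API:

```python
import secrets
import time

TOKEN_TTL = 300             # seconds: short-lived by design
STEP_UP_THRESHOLD = 10_000  # actions above this amount require step-up

def issue_token(user_id, tenant_id, scopes):
    """Credentials are always bound to a user and a tenant: no orphaned actors."""
    return {
        "token": secrets.token_hex(16),
        "user": user_id,
        "tenant": tenant_id,
        "scopes": set(scopes),             # fine-grained entitlements
        "expires": time.time() + TOKEN_TTL,
    }

def authorize(token, action, scope, amount=0):
    if time.time() > token["expires"]:
        return "denied: token expired, re-authenticate"
    if scope not in token["scopes"]:
        return "denied: missing scope"
    if amount > STEP_UP_THRESHOLD:
        # Human-in-the-loop: re-verify the person behind the agent.
        return "step-up: re-verify the human behind the agent"
    return "allowed"

t = issue_token("u-42", "acme", ["payments:read", "payments:write"])
print(authorize(t, "refund", "payments:write", amount=50))           # allowed
print(authorize(t, "transfer", "payments:write", amount=1_000_000))  # step-up
```

Note how the million-dollar transaction from Sagi’s example is not denied outright: it is escalated back to the human, which is the step-up pattern he describes.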
Beau Hamilton (16:10.848)
Yeah, I’ve heard that phrase, human in the loop. And those are great tips; I’m hoping industry professionals listening right now are taking notes, because those are good insights. Another thing I’ve heard people talking about, and I’ve come across when researching the behind-the-scenes security aspects of AI agents, is MCP, the Model Context Protocol. Correct me if I’m wrong, but it’s an emerging standard designed to give AI agents safe, structured access to external tools, APIs, and data in general. On the surface, it sounds like it could solve the access problem. So I’m just curious: with MCP available, why wouldn’t you just use it to open up your SaaS product’s API to agents?
Sagi Rodin (17:03.628)
Yeah, so MCP is great. I think that it was definitely needed. I remember 18 months ago, we were thinking, okay, we’ll connect agents to APIs. But then I was having a lot of conversations where we realized that this is just not the way agents are used to interacting. You need something that is a translator layer for the agents.
And I think that this is what MCP does. Basically, we should think of it as a layer that sits on top of your APIs and makes sure that agents trying to connect to your system can do so in a way that is convenient for them, through conversion to tools. So it’s a great transport layer, I would say. And there are some security principles being introduced over MCP. But at the end of the day, we need to remember, maybe getting back to my previous answer, that it does not decide who can do what, on which tenant, under which limits, how we prove it later, how we troubleshoot it. So you still need identity, authorization, secret management, quotas, audit logs. And that’s basically what we’re trying to do here. We’re taking this MCP; we’re definitely not trying to reinvent the wheel. I think it’s a great wheel, so no need to reinvent it. But for some use cases, especially in B2B or fast-paced B2C applications, you definitely need to harden it with the principles I stated. We need to make sure that it’s tight.
Another thing I would say is that today we have MCP, but in the future we might have different kinds of agentic layer interfaces. If we go back 20 years, we had SOAP, right? And then REST APIs were introduced, and then GraphQL. So there are definitely going to be more standards. And I think it’s important to treat those layers as abstracted conversion interfaces, so that we can apply those tools later to any interface that might come our way.
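The hardening Sagi describes, where MCP stays the transport layer and the things it does not decide (entitlements per tenant, quotas, audit trail) are wrapped around every tool call, could look roughly like this. The entitlement table, quota, and tool names are illustrative assumptions, not part of MCP itself:

```python
from collections import defaultdict

# MCP converts APIs into tools; this wrapper adds what MCP does not decide:
# who can do what, in which tenant, under which quota, with proof afterwards.
ENTITLEMENTS = {("acme", "u-42"): {"list_invoices"}}  # tenant-scoped grants
QUOTA = 3
usage = defaultdict(int)
audit = []  # replayable record of every decision

def tools():
    """Stand-in for the raw tools an MCP server might expose."""
    return {"list_invoices": lambda tenant: f"invoices for {tenant}"}

def guarded_call(tenant, user, tool_name):
    if tool_name not in ENTITLEMENTS.get((tenant, user), set()):
        audit.append((tenant, user, tool_name, "denied"))
        return None
    usage[(tenant, user)] += 1
    if usage[(tenant, user)] > QUOTA:
        audit.append((tenant, user, tool_name, "quota exceeded"))
        return None
    audit.append((tenant, user, tool_name, "allowed"))
    return tools()[tool_name](tenant)

print(guarded_call("acme", "u-42", "list_invoices"))  # entitled: allowed
print(guarded_call("acme", "u-99", "list_invoices"))  # no entitlement: denied
```

Because the wrapper only sees abstract tool calls, the same guard could sit in front of whatever agentic interface replaces MCP later, which is the abstraction point Sagi makes above.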
Beau Hamilton (19:40.544)
So it’s not the sort of end-all-be-all, comprehensive thing where you adopt this protocol and you’re good to go, it covers all your security bases, right? It’s still great to have, but you still have to employ all these other security best practices that you’ve mentioned.
Sagi Rodin (19:58.4)
Exactly. I would take control, especially at this stage where the standards shift so fast. I would make sure that the principles I’m used to, and the new principles that come our way, just like we stated, are things I have full control over, and that the ones handling them are the players specializing in that.
Beau Hamilton (20:20.608)
Yeah. So have you seen wide adoption of this protocol, or is it still fairly niche? And have you also seen some of the legacy identity players, like Okta or Ping Identity, adopt this approach to security? I’m just curious how your approach differs from some of your competitors, and what everyone else is doing in regards to security.
Sagi Rodin (20:44.078)
Yeah, so I would say that MCP is definitely being adopted. And we definitely see the identity players, at least the leading ones, taking responsibility, understanding that they are in charge of this foundational piece of securing this new way to access SaaS applications.
Frontegg has a bit of a different approach in some respects. First of all, from the early days, we took the multi-tenancy aspect as a first-class citizen in our system. So anything you can think of is basically configurable on a tenant basis, on a customer basis. A lot of our customers are serious, heavy, multi-product companies serving different types of customers. A lot of them do PLG; on the other hand, they also do enterprise. And each of those types of customers requires different configurations, different onboardings, different compliance. So I think that’s definitely a different approach, and it also makes us think a lot about things like tenant data segregation, where we make sure that things don’t leak between tenants. So yeah, those are some of the things that are different.
Also, I think that we took the authorization piece, over the last six years, in a very different manner. We think of it as a holistic way to control what users can do in your app based on a lot of parameters: their security posture, their roles. So imposing role-based access control, but also dynamic scoping, understanding what their intent is. We’ve been doing that for the last few years, and now we’re trying to take the same dynamic concepts and apply them to agents as well. So I think that a lot of our competitors are doing a great job, but they’re focusing on the authentication piece, where we’re trying to take it to real-life examples of how we can help our customers really adapt to this world. You asked about the adoption and usage of MCP; most of them already built MCPs, right? Their customers asked for it. They built internal agents and copilots, so they had to build this layer. But when we dive into that with them, we see that it’s not really protected, and there are a lot of concepts they didn’t think about. They kind of just built something very basic. And I think that once we see those products actually released to the public and becoming enterprise-grade, we will also need enterprise-grade security and identity management over them.
Beau Hamilton (23:56.246)
Hmm, yeah, those are great points. I want to ask: obviously competition is heating up. You mentioned a lot of these new players entering the market, not only releasing AI agents of their own, but also trying to tackle the security issue, which is obviously the same problem you guys are trying to tackle. And based on your previous answer, it sounds like you’re doing quite a few things differently than your competitors.
And it’s great to see some uniformity; everyone’s trying to adopt or incorporate MCP. I’m curious, though: specifically for startups or SaaS builders listening, what are some common mistakes you see when teams try to handle identity and security themselves? Are there any red flags you see when they try to tackle the security aspect all by themselves?
Sagi Rodin (24:53.362)
Yeah, I think it all depends. Building a startup, it’s really hard to go back and track the mistakes that you made. But I will say that every stage requires different measures, right? So if you’re just launching, it’s fine to take something basic, run very quickly, release the first MVPs, and get the first customers. What we see is that once you evolve to the stage where more serious customers are coming your way, that’s where you also need to uplift your security and auditing capabilities. You need to go through compliance. You need to prove that everything is great and tidy with your system.
So I would say, I wouldn’t call it a mistake, but basic authentication that is not tenant-aware, that does not treat entitlements as a foundational piece of the system, will definitely make it harder to sell your product eventually and to go up the ladder of logos and customers. And this is what I would do: I would treat the system at every stage, and add enterprise-grade capabilities, so that you can rest assured when you’re shipping your product to companies that will use it through agents.
You can sleep at night knowing that their data is protected. You can have a conversation with their CISOs and really show your identity and security posture, and be proud of it as well.
Beau Hamilton (26:52.022)
Yeah, it’s just worth underscoring: you really have to take the security aspect seriously, because when you’re launching these services, something going wrong can really make or break a business. So you’ve got to be proactive, and you’ve got to maintain it after it’s deployed, just to make sure nothing goes wrong and all these guardrails stay in place. Okay, so obviously there’s a lot to watch in how this industry develops over the coming months, let alone the coming years. It’s moving so fast it’s hard to even predict where it will be in the next few months. But I want to pose the question anyway, because I always love hearing your insight on where the industry is going, since you’ve been working in the industry for so long. If we were to look one to two years ahead, if we can even look that far ahead, how do you see identity and AI security evolving?
Sagi Rodin (27:47.025)
Yeah, so I think that identity and AI security, or security overall, is an enabler, right? At the end of the day, we’re enabling value, we’re enabling faster innovation, we’re enabling great companies with great ideas on how to help end users. And I like to think of it that way: we need to continue enabling that. So to answer this question, we have to take a zoomed-out approach and look at how SaaS applications are going to be consumed over the next few years. And I think that this is going to change tremendously.
We’re already starting to see that happen, because point-and-click solutions are going to be a thing of the past. Accessing different layers within your product, like APIs, for example, will disappear in a few years. You will have access to the data that you store, and customers will be able to make basically whatever requests they need, or send their agents into your app to do any action they need, to get the value they need at that point. So no more of: the product manager decided on a certain flow, thought it would be the best user experience, ran some usability tests, released it, and then you would have like 60% of customers happy with that flow.
I think that SaaS applications in general will move to more of a describe-and-done approach versus the traditional point-and-click. If that happens, and we’re already seeing some of the AI-native applications being built this way, everything is going to be a natural language conversation where the end user just says: okay, I need that, I need that from this app, I need that from the second app, connect the data, and everything just works. So the security and identity pieces also have to keep track of these kinds of experiences. We want that dynamic tool management: seeing what requests are being made on which tool and whether that’s legit, seeing which tools are trying to connect with other tools, seeing what interactions users are asking me, as the app owner, to have with other applications, and making sure that this data connects in a good way to provide an answer and value to our customers. So a different type of SaaS, and, I would say, the same principles, but an elevated and adapted way of managing the identities and the security over these new types of applications.
Beau Hamilton (30:46.626)
Yeah, it’s fascinating to think about where this is going from a visual standpoint, and I think every consumer can visualize this. The apps on your phone are essentially going to become kind of a thing of the past, in a way; they’re going to be condensed and integrated into your favorite LLM of choice. And then, like you were saying, you can just ask it to do whatever you want, and because it’s integrated with these other tools and services, it’ll execute the commands you want without you leaving your chatbot, so to speak. It’s just crazy to think about, because for so long we’ve been obsessed with all these new apps and the traditional smartphone interface.
And so to think that just in the next year or two, that’s going to be almost a thing of the past. It’s going to shift to voice commands and AI text prompts, basically, to do whatever you want to do, and then have agents maybe go out and post on social media or whatever it might be. It’s interesting, it’s crazy, and it opens up a whole lot of security issues, to say the least, and privacy issues. Okay, so you’ve obviously shared a lot of really interesting insights with us. I appreciate that, and there’s a lot to think about. Before I let you go, I want to ask a couple of questions that give you a chance to share some of the knowledge you’ve accumulated throughout your career, and maybe some things you would do differently. So if you could go back to the start of your career, what is one piece of advice you’d give yourself, knowing what you know now?
Sagi Rodin (32:28.479)
Yeah, I would love to answer that. At the end of the day, I would definitely say to any founder or innovator or entrepreneur to trust their instincts most of the time. I think that the combination of trusting your own instincts and working with great people around you will make sure that you succeed 80% of the time, at least. For the other 20%, there’s always the industry, what is happening, the downturns, and things like that. But surrounding yourself with great people that you can trust, and trusting your instincts, is something I would definitely put focus on and make sure I apply. I would say: pick your battles, always hire people that raise the bar, and make sure that you’re always doing things that provide value to the organization and, essentially, to your customers. Those are the tips I like to give to colleagues who ask for my advice.
Beau Hamilton (33:56.789)
Yeah, I think that’s great advice. And I love that you brought up having a good team around you, good people, especially in regards to this conversation around autonomous AI agents. It’s really about the people. You can’t forget about having a person in the loop, and also just a good, capable team to work alongside. That helps everything. And then I’m curious... sorry, go ahead.
Sagi Rodin (34:21.774)
Yep. I just want to say that’s a really great point, right? Because at the end of the day, teams today are leaner, they’re meaner, they’re more focused. One person today, in marketing for example, but in engineering and product as well, can definitely do 10x, maybe even 100x, of what we could have done in that position even a year or two ago. But at the end of the day, you also need to surround yourself with strategic people that will help you navigate through all of that and have control over the whole organization. And this is something where, at least in the upcoming decade, it will be a human thing, and you will need to build this incredible team that can pace forward in this crazy, crazy era that we’re about to witness, that we’re already starting to witness.
Beau Hamilton (35:29.356)
Totally, yeah, and just having the right people to keep you focused too, because I mean, there’s so much we’re capable of doing and maybe want to do, especially from a business standpoint of incorporating all these different features and capabilities. But having kind of a focused set of goals to work towards is always important. And yeah, just with the uncertainty, it’s always good to have a good, reliable crew to fall back on and keep you motivated. So I’m curious, you know, looking back again, there’s probably a bunch of different resources you’ve leaned on and used to fine-tune your skills and whatnot, but is there a particular book or podcast or other resource that comes to mind that’s had a major impact on your life or your specific leadership style?
Sagi Rodin (36:22.38)
Yeah, so you know, if one comes to mind, I would definitely say that maybe I would choose one resource that in this era, I think every entrepreneur has to go back to. They’ve probably read it, but they have to go back to it, and this is Only the Paranoid Survive, because I think that the book, you know, Andy Grove’s, like, you know, great, great book that talks about inflection points and, you know, moving before the market forces you to do that. And I think this is exactly the era that we’re witnessing now. I think that agentic AI kind of changes how software is being used and how it must be secured. And I think that this kind of healthy paranoia in this sense means that we have to instrument everything, question the defaults, and be ready to move fast, to adapt fast to everything that is happening. And I think that that mindset is steering, you know, how we build Frontegg AI, for example, and how we’re shipping new things in the upcoming weeks. A lot of this mindset of, you know, this is a great opportunity to provide more value in a faster way to our customers. So, you know, let’s bring it on.
Beau Hamilton (37:45.548)
Totally. I love it. That’s a great message. I’ll have to give that book a look. I just appreciate all the insights. This is a brilliant, insightful conversation. Thank you for your time, Sagi. And for those interested in learning more about Frontegg, maybe they want to get in contact with your team, where should they go? Where would you send them?
Sagi Rodin (38:09.592)
You know, we’re completely inbound, proud to be. So just go to our website, open an account. It’s very fast. Today people do that obviously through Cursor as well. So they just type, you know, “add authentication to my app,” and it does that in a moment. Trust me, I did it myself like two weeks ago, just to see how adapted we really are to this new era. This is the way, but obviously, you know, I’m also available if you want to chat with me, sagi@frontegg.com. I would love to have a coffee if you’re in the States, or when I’m visiting Israel. Always fun. So yeah, let’s talk.
Beau Hamilton (38:45.93)
Awesome. Perfect. All right. That’s Sagi Rodin, Co-founder and CEO of Frontegg. Thanks again for everything you shared with us. I really appreciate it.
Sagi Rodin (38:55.576)
Thank you so much.
Beau Hamilton (38:56.95)
Thank you all for listening to the SourceForge Podcast. I’m your host, Beau Hamilton. Make sure to subscribe to stay up to date with all our upcoming B2B software-related podcasts. I will talk to you in the next one.