AI Security Awareness Training & Simulations: Adaptive Security | SourceForge Podcast, episode #70

By Community Team

Adaptive Security is the first AI-driven cybersecurity training platform designed to prepare employees for sophisticated threats like AI-powered phishing, deepfakes, and multichannel social engineering attacks. With hyper-personalized simulations and an easy-to-use content builder, it helps organizations proactively track and reduce AI-related risks while boosting security awareness across the entire workforce.

In this episode, we speak with Brian Long, CEO & Co-Founder of Adaptive Security. Our host introduces a unique episode featuring a deepfake of himself, created by Adaptive Security. The episode explores the threats posed by deepfakes and how Adaptive Security is addressing these challenges with their next-gen security awareness training and phishing simulation platform. The platform uses deepfakes to educate clients about risks, offering personalized training content. Brian discusses the importance of engaging and relevant security training to combat modern threats, emphasizing the need for realistic simulations and high employee engagement. The conversation also touches on the broader implications of AI and deepfakes in various industries, highlighting the evolving landscape of cybersecurity.



Learn more about Adaptive Security.

Interested in appearing on the SourceForge Podcast? Contact us here.


Show Notes

Takeaways

  • Cybersecurity is a relentless battle with numerous attacks happening daily.
  • Many successful cyber attacks go unreported due to companies’ fear of reputational damage.
  • Adaptive Security focuses on making security training engaging and relevant to employees.
  • Personalization in training content leads to higher engagement and effectiveness.
  • The rise of AI-powered threats necessitates innovative security awareness strategies.
  • Employees are curious and interested in learning about AI threats.
  • The volume of attacks is increasing across various communication channels beyond email.
  • AI agents pose potential security risks as they become more integrated into business operations.
  • Companies often keep cyber incidents quiet, impacting the visibility of the issue.
  • The number of public companies has decreased, while private companies have increased, affecting transparency.

Chapters

00:00 – Introduction and Deepfake Demonstration
03:45 – The Reality of Cybersecurity Threats
08:30 – Adaptive Security’s Approach to Training
13:15 – Personalization and Engagement in Security Training
18:00 – The Growing Threat of AI-Powered Attacks
22:45 – Employee Curiosity and Training Effectiveness
27:30 – Expanding Attack Vectors Beyond Email
32:15 – AI Agents and Security Risks
37:00 – The Quiet Impact of Cyber Incidents
41:45 – Trends in Public and Private Companies

Transcript

Beau Hamilton (00:08.998)
Hello everyone and welcome to the SourceForge Podcast, I’m your host, Beau Hamilton, Senior Editor and Multimedia Producer here at SourceForge, the world’s most visited software comparison site. Now, I’m not sure if you’ve noticed, but there’s something different about me today, which is that I am not real. I’m actually a deep fake put together by the folks at Adaptive Security, which is also the focus of today’s episode.

All right, that’s right folks. That was an AI deep fake of yours truly. We wanted to give you a taste of just how convincing some of these deep fakes are and the threats they pose to people and businesses around the world. As it’s, you know, it’s never been easier for bad actors to create these deep fakes and complex social engineering attacks. And I really only see the problem getting worse and more pervasive, unfortunately, as time goes on.

But that’s where Adaptive Security comes in. They offer a next-gen security awareness training and phishing simulation platform that enables security teams to prepare employees for modern threats through highly customized training content that’s personalized by role and access level. And the platform actually leans into these deepfakes, featuring company-specific open source intelligence and actual deepfakes of company executives, all in an effort to educate the client about the risks associated with these threats. So they’re sort of fighting fire with fire, or AI with AI, which I think is really unique and pretty clever. So joining us to talk more about this endeavor and the cybersecurity awareness training industry is Brian Long, Co-Founder and CEO of Adaptive Security. Brian, welcome to the podcast. Glad you could join us.

Brian Long (01:46.215)
Hey, it’s great to be here and is it the real you now? I’m not sure. You look about the same.

Beau Hamilton (01:50.598)
I mean, if only you could pinch me, and maybe I have like a safe word to give you, but I assure you I’m real. I’m here. I’m with you right now live. That was really fun. I’m glad we did that.

Brian Long (02:05.575)
Yeah, I think so too. You know, something I’ve been saying recently, and I’m sure the AI is going to catch up to this, but the AIs are not very good yet at singing or whistling. So if you want to sing or whistle, that would really convince me that it is, in fact, you. I don’t know if you want to sing, though.

Beau Hamilton (02:21.158)
No, I’m gonna hold off on that, but that’s good to know. Yeah, they’re the worst they’re gonna be right now. That’s why it’s so kind of unsettling to think about where it’s gonna go in the future, right?

Brian Long (02:33.545)
Yep, yep, yep, sounds good.

Beau Hamilton (02:35.674)
But yeah, first of all, I just want to say, I think it’s really neat what you’re doing to sort of tackle this deepfake problem, which has really serious implications, but also huge sort of mental health issues, too. When you read about all the different, you know, deepfakes being passed around in schools, it’s hitting every industry. It’s becoming a serious problem. So I’m glad you’re working to address it.

Now, I provided a pretty high-level overview of your company, but I’d just love to hear you sort of break down what it is you’re doing over at Adaptive Security and just how you’re helping companies protect their people, their employees, from these next-gen threats.

Brian Long (03:14.205)
Yeah, so look, we are focused on helping organizations protect themselves and their team from next generation AI threats. So things like deepfakes, voice-based phishing, SMS-based phishing, phishing using generative AI email. The way we do that is two core products. One is a phishing product that simulates AI-powered attacks over those different channels, voice, SMS, email.

And then number two is a training product where we offer advanced AI training on these new types of threats. And the training product is pretty magical. It allows you to create a training on anything as well as to access our extensive library of expert vetted trainings. And the trainings include all sorts of personalized elements like, you know, deep fakes of their execs and interactions and stuff like that. Makes it, makes it something that employees really love. The average employee rating is a 4.8 out of five. So people love it.

Beau Hamilton (04:13.402)
Awesome. Yeah, I know security awareness has kind of been like a checklist thing for companies, but making it more engaging and interactive is a plus. Definitely needed, I would say. Was there a particular moment or experience that inspired you to build and kind of create the Adaptive Security platform?

Brian Long (04:33.917)
Yeah, you know, as you noted, historically, I think security awareness has been viewed as kind of this checklist thing, but it was kind of wild to me because, you know, it’s the thing that everybody in the company sees. You know, it’s the highest visibility thing that you can do in your security organization. And yet most people would acknowledge that their current version of it is not great, right? It’s not covering the newest threats. It’s not engaging for the employees. So how do we make something that’s actually great?

And prior to this, I was running another software company and we had over a thousand employees at that company. And when I would get our security trainings, I was blown away that these trainings were as bad as they were, because we used some of these legacy vendors where, again, they’re just kind of checking the box on it. And look, we had a number of security incidents and it almost always came through social engineering. Like 90% social engineering. And it’s like, how are we not doing a better job of helping our team learn about this and prepare for this?

Beau Hamilton (05:34.246)
Yeah, especially when you’re working in the industry and you’re getting these attacks and kind of talking with other people who have fallen for it and stuff. It’s an issue, it’s a real issue and it’s only getting worse. Were you ever a victim of an AI deepfake yourself? Like have you been on the receiving end of one of these AI powered threats?

Brian Long (05:54.111)
Yeah, so we’ve had people that pretend to be me and reach out to new members of our team over SMS, over email, using elements of things I do and say. I do do a lot of podcasts and things like this, but today folks only need three seconds of audio in order to pretend to be you. And they can go to the LLMs, they can go to ChatGPT, put in your LinkedIn URL and it’ll spit out a tremendous amount of stuff about you. So it’s very easy. You know, I think sometimes people think about the deepfakes just being your voice and likeness, but now it’s also everything about you that’s out there, right? It’s very easy to get all this open source intelligence about you and use it in real time to make a digital version of you.

Beau Hamilton (06:42.726)
Yeah, I mean, you can put job resumes together with just a few, you know, AI prompts and deepfake videos. It’s crazy. I don’t think society’s quite ready, but you know, here we are in 2025. Now, like I was saying, it definitely hits different when you’re on the receiving end of these phishing attacks. I mean, just seeing the introduction your team created for me, this deepfake of me, is kind of eye-opening. But it’s also just interesting to see security awareness evolve over the years based on the threats out there. Up until these AI tools were released, it seems like, again, we talked about how security training has been like a checkbox: make sure you have a secure password, don’t click random links in your inbox, etcetera, etcetera. But the approach you have here with your simulations makes employees actually, I think, take this training more seriously, right? How does Adaptive help teams move beyond compliance to build a truly resilient and human-aware security culture?

Brian Long (07:53.737)
Yeah, look, I think that there’s two core things. One is making training that employees actually love and respect. And that’s a tall order, right? Because people are very busy. They got a lot of other things in their job. And often the last thing they want to do is something that they are forced to do, but doesn’t actually end up delivering what they need to do in their job, right? So I think that what’s something we spend a lot of time on is how do we make training content that A, is very engaging and relevant to the individual, and also B, is very concise, right?

So on the first part, that means personalizing it with deepfakes of their own organization, OSINT about their organization, stuff that just makes it extremely relevant for them and personalized for them. Number two is keeping it short. There’s the famous quote that, you know, I didn’t have time to write a short letter, so I wrote a long letter. I think a lot of places, in their trainings, they say, look, here’s the big list of stuff, let’s just throw it all in a big training, have it take an hour, and it just wastes everyone’s time. It takes the producers of the training a lot more work to squeeze down a whole bunch of content into five minutes or eight minutes, but you can do it, you just have to spend the time and be smart and be thoughtful about which things you’re gonna hit and how you’re gonna do it. So we spend a lot of time to condense it down and make it very relevant to them.

Beau Hamilton (09:26.468)
Now, I know you’ve worked with a pretty comprehensive list of clients, not to name names, but I know, you know, like Figma, BMC Software, Stone Point Capital, and even the Dallas Mavericks, which now has to worry, I guess, about deepfakes of Cooper Flagg. What kind of outcomes are your customers seeing after working with you and your team?

Brian Long (09:49.971)
Yeah, so look, we look for a couple different things. A, when we run our simulations, we do want to see a significant drop in the failure rate of those simulations over time. If we’re running an AI simulation, doing an AI-powered attack, and let’s say 20%, 30% of people fail the first time, we want to see that over time dropping to sub 10%, right? And that’s just applying a significant filter in bringing down the risk of the organization a lot.

And we want those to be realistic, right? We don’t want them to just get trained on spotting these sort of easy-to-see fake attacks. We want them to actually get good at spotting things that are realistic. So that’s where we’ve focused. And then number two, look, we want to see people actually taking the trainings, engaging in the trainings. So we want to see a very high completion rate on those trainings. And then we want to see people taking the tests, taking the quizzes related to those trainings, doing well and learning over time.

And from that, we calculate a risk score for each individual at the company to understand how different folks are doing and where security teams should spend their time.

Beau Hamilton (10:58.308)
Okay, is there maybe a surprising lesson that you’ve learned about your market or your customers that sort of changed your approach? Because I imagine you’re constantly refining different strategies and rolling out new sort of features and simulations for your platform to work with clients. But is there anything that kind of stands out that you’ve learned from working with the people you’ve worked with?

Brian Long (11:22.835)
Yeah, look, I think that what’s interesting is that employees, look, at the end of the day, they are curious. They’re interested in these AI-powered threats. They’re interested in what’s going on. They do want to protect themselves. So I think that we kind of came into this with the idea that employees really just didn’t want to hear about any of this stuff, and we were going to try to make it a little more interesting for them. But it turns out they do.

One of the biggest requests we get is that people want to share it with their loved ones, with their family, whoever, and be able to get them access to it. So it is something that I think people are really interested in right now.

Beau Hamilton (12:00.484)
Yeah, I think at the very least it makes for good dinner conversation, talking about the various threats, like a deepfake version of your grandson asking grandma for money or something. It all relates. It’s not just people working in the tech industry who have more first-hand knowledge of this subject. It’s going to impact everyone in some way, shape, or form.

Brian Long (12:29.599)
It is, and I think that in the tech industry, we definitely live in a bubble. And in that bubble, that opening video we just showed of you as the deepfake, folks in the tech industry, they’re like, oh, that’s pretty good. But they’re like, yeah, I’m aware of that. That doesn’t blow my mind. For 99% or whatever it is of the population, that absolutely blows their mind.

They cannot believe that that is possible. And what’s even scarier is that we made that with one single image, and we made it in like two minutes with an intern. So that’s not even state of the art. State of the art right now is being able to have this type of real-time conversation with you, with you actually being a deepfake. That’s the latest state of the art right now. The thing we showed at the beginning, that’s pretty mainstream.

Beau Hamilton (13:22.66)
Yeah, yeah, I know. I’m going to be out of a job here pretty soon with this podcast. I know that Google NotebookLM is kind of fun to play with, getting podcasts generated just off a few prompts.

Brian Long (13:33.471)
Well, A, it can’t be funny. You know, I haven’t seen it be really funny yet. So we gotta focus on the being funny. B, as I mentioned earlier, it doesn’t do the singing and it doesn’t do the whistling. So yeah, if we could focus on that, we could be doing pretty well.

Beau Hamilton (13:43.952)
Okay, well, I can barely whistle. I definitely can’t sing, so I’ve got my work cut out for me. Now, I know it’s hard to put a dollar value on the importance of taking this security awareness training seriously and incorporating it into your company, right? Because at the end of the day, some of the metrics just aren’t there; not having a security incident is the good thing. But how do you sort of factor in the level of importance of prioritizing an effective security awareness training platform?

Brian Long (14:27.295)
Yeah. You know, it’s so weird, but I guess it makes sense. When we talk to folks, you know, we do a lot of conversations, I’d say a thousand in the last year, right?

When you talk to companies, companies are having a very high frequency of these types of attacks and they’re often causing significant financial pain. But most organizations are very quiet about these things happening, right? Because they don’t want to tell anyone, hey, we just got scammed out of a few million dollars. Hey, you know, we just had something that took down a whole system for a long time.

So it’s kind of amazing the frequency with which these things are happening, and yet so much of it is kept quiet and isn’t reported unless there’s a very, very clear, large-scale, and ongoing public disruption like the MGM thing. You don’t see as much reporting about it, but it’s having a huge financial impact. There have been a couple of different studies published recently finding that the average deepfake attack is causing half a million dollars in pain for organizations, things like that. So it is hard to put a finger on it, but it’s happening very frequently right now. And unfortunately, companies are dealing with tremendous amounts of problems around it.

Beau Hamilton (15:48.314)
Yeah, I mean, unless you have private information, PII, that’s leaked and you have to make a formal, you know, disclosure about it, you don’t really hear about a lot of the behind-the-scenes stuff going on.

Brian Long (16:00.541)
Yeah, and even they’re like, you know, the ignorance is bliss. They probably have leaked things and they don’t want to know about it, so they don’t know about it and therefore they don’t disclose it. So I think that there’s a tremendous amount of stuff happening there that we don’t hear about.

And even then, that falls within the public company world, which, something that blows my mind too, this is somewhat of a tangent that we’re going off on here. But did you know how many public companies there are today compared to how many public companies there were 20 years ago? There are fewer public companies today than there were 20 years ago, like quite a bit fewer. But there are tremendously more private companies, right? And that’s because private equity owns all these private companies. You’ve got this huge, huge increase in the volume of private companies, whereas the number of public companies has not grown.

Public companies have very different requirements for what they need to report for these types of things, whereas private companies, it’s different. So that’s also a big component of why you don’t hear it as much.

Beau Hamilton (17:07.074)
Yeah, that’s a good point, I didn’t think about that. Yeah, there are fewer sort of regulations you have to abide by, right? Huh. For some reason, my mind goes to all these different acquisitions, kind of like the loopholes where companies are acquiring other companies but not formally acquiring them, because they’re trying to fit into this regulatory framework. They’re just gobbling up everything that matters, like the brain power and some of the IP, without actually formally acquiring companies. Again, going off on a little tangent here, but it’s little loopholes, things that kind of obscure what’s really going on, maybe.

Brian Long (17:51.359)
Yeah, you’re referring to some of the recent acquisitions where they’re buying 49% of the company, then the whole founding team leaves and joins the acquiring company and therefore doesn’t have to go through the M&A approval process through all the government stuff, right?

It’s kind of amazing that it took this long for them to really acknowledge this loophole. But yeah, it is true. And by the way, just on the public-private thing, I know we’re on off a tangent here, but I think it could be interesting for your listeners. In 2025, there’s about 4,600 private companies as of now. I’m sorry, public companies as of now today. In the mid-1990s, there were about 8,000 public companies. So it has fallen off quite a bit.

And then if instead you dig in and ask how many companies are out there that have at least $100 million in revenue, you know, ChatGPT is telling me that it’s at least 10,000 and likely far more. So, you know, there’s just a tremendous amount of private companies now that are flying a little under the radar.

Beau Hamilton (18:57.072)
Wow, yeah, so that’s kind of leads me into my next question of sort of these trends, I guess you’re noticing in the industry. What are some of the maybe macro trends you’re seeing emerging in this space that you’re, I guess, working to stay ahead of? I mean, I imagine you’re not working to necessarily stay ahead of the trend of private companies, but in the security landscape, what are you keeping an eye on?

Brian Long (19:21.404)
Yeah, look, I think that we’re seeing the volume of attacks grow outside of email. So over voice phone calls, over SMS, over WhatsApp, over Instagram or LinkedIn, basically any channel that you can reach an employee on, but that the company’s not paying attention to. Because the company pays attention to their email.

They’ve got all sorts of tools to see what’s going on in their email. Those attacks are getting more sophisticated for sure, but where we’re seeing, I think, the most potential growth and issue is in all those other channels. And it gets a little hairy for the company, right? Because the company owns their corporate email, but they don’t really own your LinkedIn. They don’t own your personal cell phone. They don’t own your social channels. They don’t own these other places. And yet those might be the places where you’re most vulnerable for that attack to begin. So that’s really, I think, where folks should be spending a lot of their time on the social engineering side: outside of email, what else is going on?

Beau Hamilton (20:29.946)
Yeah, you’ve got to implement that zero-trust sort of framework. Trust nothing, trust nobody. Now, I’ve had a lot of discussions with B2B software companies that are all in on these AI agents. We’ve heard mention of these AI agents for a while now, for the better part of the last year.

But there’s a lot of upside here with how they’re able to autonomously manage and secure networks, handle ticket requests, and just free up the time spent for engineers to do other tasks. But part of me worries about the possible security risks they pose. Have you seen any cases yet of these AI agents sort of going rogue or being weaponized by bad actors?

Brian Long (21:18.983)
Yeah, for sure. I mean, look, certainly when thinking about human risk security today, we think about highly credentialed people at the top of the list for that risk, whether that’s executives or just people that have a lot of credentials. I think for the agent side of the game, it’s still very nascent. And as a result, most companies are not giving their agents a tremendous amount of credentialing. They’re letting them do some bottom level things, but they’re not giving them the sort of credentialing access initially that would give them the ability to do this stuff. But I do worry a little bit about kind of like a Skynet situation here where, you know, we’re going to be playing with it unplugged for a while. And then one day they’re going to say, OK, you know, let’s give it credentialing. Let’s let it do more of this stuff once we’re comfortable with it. And then kind of, you know, all heck breaks loose.

So I don’t think we’ve seen it happen yet because it is so nascent. I mean, it’s obviously a super hot topic. And yet if you talk to most companies about what they’re actually doing on the agent side, most companies are not doing much today. But they’re automating a customer support center for sure, right? They’re doing stuff like that, but there’s not a lot of credentialing. It’s mostly just automation of customer support. I think we’re going to see it though take over a lot of other elements as we automate those workflows.

Beau Hamilton (22:45.35)
Yeah, it’s still early days. I mean, we’ve seen the headlines; I think it was Claude, one of Anthropic’s new LLMs, sort of blackmailing engineers to perform certain tasks, you know? And you think of, again, where we’re at now and where it’s going, and you can see these threats really starting to have some substance to them.

Brian Long (23:11.517)
Yeah, I do think that the AIs are going to work towards their incentives, right? Whatever someone says, hey, you get points for doing this, that’s what the AI is going to do. So we all work towards our incentives, of course, but people also have morality and they have fear of doing things that could get them in trouble. AI doesn’t have those fears or that morality. It’s just ultimately going to do what it gets its points for. That’s what scares me the most: as these models get really cheap, any 13-year-old who’s smart and capable can go out and have these things cause havoc. And there could be no monetary goal. Companies have controls around wire transfers, but they’re often missing a lot of other essential controls that they need.

Beau Hamilton (24:00.006)
Right. Yeah, again, reading about the situation in schools right now with some of these deepfakes and what’s going on just makes me glad I’m no longer in school. I’m becoming more of an advocate of, you know, fewer cell phones, less tech, or at least fewer personal tech items in the classroom, which makes me feel old, and I’m questioning things more than ever.

Brian Long (24:25.631)
Did you go to school in the days where there were smartphones in the classroom? Did you have a smartphone in high school?

Beau Hamilton (24:31.462)
I did. Yeah. I was kind of right on the beginning side of things with that. I remember my first smartphone, a Samsung Fascinate for Verizon. It was back before they had the Galaxy, you know, one, two, three, and so forth. And yeah, I remember it was like the hot commodity. My classmate next to me always wanted to play games on it while we were in health class. And I’d be like, all right, you’re gonna help me with this homework, and you can use my phone for five minutes.

But yeah, it was already a problem then.

Brian Long (25:04.381)
Yeah, I feel lucky that I got my first smartphone after I graduated college. So never had one in school and I think it’s for the benefit of everyone that I didn’t.

Beau Hamilton (25:18.862)
Yeah, well, and it’s good too, because we’re seeing... I’m very aware of how it’s rewired my brain. My attention span is much shorter, my ability to recollect information is worse; it has some, you know, pros and cons, that’s for sure. And then you add in some of these AI things, and instead of actually figuring out and memorizing content for an exam, for example, people are just figuring out how to use a tool to get the answer. So it’s just a completely different sort of approach. For better or worse.

Brian Long (25:57.119)
Yeah, I mean, look, I think you’ve got the old argument people make about, you know, calculators: people used to do this stuff by hand, now they can use calculators to do it. And certainly I think we’re seeing the same thing happen now. I do think that it’s going to give a lot more opportunity for us all to, you know, create art and ultimately be creative and be curious. So I think it gives a lot more agency, but you’re right, it can also fall into a world of shortcuts and, you know, unfortunately not learning as much.

Beau Hamilton (26:29.19)
Yeah, I just think it’s always a good reminder that everyone should have sort of their own guardrails and kind of rules to follow, restrictions, right? I mean, no one’s going to tell you how to live your life, but you’ve got to understand some of the pros and cons, both sides of the story, and how it’s going to affect you, so…

All right, Brian, now I’ve got a sort of a more personal question to kind of pick your brain about your industry experience. What’s a common misconception about your industry that you wish more people understood?

Brian Long (26:41.19)
Yeah, so one of the big misconceptions I think about cybersecurity is the volume of attacks that are happening. I think you’ve got a tremendous amount of people that are putting in a lot of effort in order to keep all of us safe every day. A, there’s a ton of work happening to stop attacks, just relentless attacks happening across every institution every day, that a lot of people are working hard on. And then B, unfortunately, there are a lot of successful attacks every day, but nobody talks about it, because companies do not want to publicize when they have an attack. It’s only going to make them look bad, make their customers worried, etcetera. We hear about it when there’s a big leak of personally identifiable information, but outside of that, it’s generally kept quiet.

So I think that A, we do have a tremendous industry of people, and I do think cybersecurity is unique in that it’s an industry that is inherently good across everything it does, right? You have people that are heroes across this industry, protecting all of us and doing it often quietly in the background without getting a lot of glory for the work that they’re doing. I kind of use the analogy that they’re like the goalkeeper in soccer. But also, too, from time to time the other side does score a goal, and we aren’t hearing about it. And I think that’s something we just need to stay really focused on as AI gets smarter and smarter, because the attacks are gonna get better and better at massive scale.

Beau Hamilton (28:12.78)
Yeah, that’s very good insights. What’s something that your competitors are getting wrong, maybe that you’re determined to do differently? Is there anything that maybe comes to mind?

Brian Long (28:22.01)
Yeah, so look, Adaptive offers security awareness training. So we want to train organizations on how to be more secure. I think that other competitive offerings have really just focused on checking the box on training. They have thought, hey, the employee at the end of the day doesn’t really care about the content, so it’s fine if it’s boring. No one’s going to pay attention. Let’s just have it be what it is.

For us, we’re really focused on how do we make content that’s very personal, very relevant to them so that they actually want to learn from it and will get something from it. So it is a hard task, but that’s really where we focused to make our product magical. And today, our average rating from employees is a 4.8 out of 5. So employees love taking our trainings, and we really want to do more and more to make our trainings extremely personal and relevant to them.

Beau Hamilton (29:18.65)
And then I want to ask too, you’ve obviously been in the industry for a long time and have an extensive background working with Adaptive and in previous roles. What’s a small change that your company has made that’s ended up having a really big impact?

Brian Long (29:37.074)
Yeah, so I mean, for me, one of the smallest changes I made came a few years ago, when I was very lucky to have an executive coach who encouraged me to just focus on the top three things that I want to get done, and also for the rest of my team to have a very clear list of the top three things they want to get done. I think we have a tendency to try to juggle 20 different balls.

We get pulled in different directions and we do all sorts of different things. And as a result, we don’t make a lot of headway on the most important things. We make little bits of headway across 20 different things. So the biggest change that I’ve made is listing out what those top things are and blocking out time each day and just doing those top high leverage things and getting those things done. I think that’s what some of the best folks do is they find a way to just focus on what matters the most and to let the rest of the noise go away. Understanding that, you know what, there’s going to be some people out there that are unhappy that you’re not doing the things they want you to do because you’re doing the things you know you need to do.

Beau Hamilton (30:42.17)
Very well said. That’s advice I’m going to take home with me, because I am the quintessential example of a millennial with focusing issues. I’m constantly doing a bunch of different things, and I’m always losing track of the main three things I should really be focused on. And as a result, the quality kind of goes down, and the deliverable at the end of the day just doesn’t get done. So I appreciate all the insights there.

On that note, I want to circle back to your platform and talk about some of the upcoming features and things you have planned. Do you have any exciting simulation scenarios or platform upgrades that you’re excited to talk about?

Brian Long (31:34.037)
Yeah, yeah. So look, one of our big focuses this year has been this idea that we want to let our customer, our user, create a training about anything instantly. To be able to say, hey, I just saw that news article about the Marco Rubio deepfake. I want to do a training on that. I want it personalized to my finance department, I want it to use all my branding and all my corporate policies to make it relevant to us, and I want it to be four minutes long. So we made this magical new tool called our Gen.ai Content Creator, and it allows you to make a training like that in literally two or three minutes. You’re able to put in the context: say I want to train on this, here’s a link to the article, here’s some additional context on our policies.

Here’s the audience, here’s the length, hit go, and it makes a training on that. It’s pretty amazing. You can then edit the training, you can change the audio narration, you can change the imagery, and all the imagery is created based on your other brand imagery it’s trained on, so it looks natural to your business. It’s got images and animations and all that sort of stuff. So it’s pretty marvelous what it can do.

And as we see the models get better and we see the tech get better, I think we’re only going to see these things get better and better and be more personalized. And we’ll be at a place in the coming years where it could really deliver that one-to-one level of training.

Beau Hamilton (33:09.712)
Yeah, I mean, the possibilities are endless. So currently, or I guess before you had this feature, you had sort of predefined scenarios that you were approaching companies with?

Brian Long (33:23.987)
Yeah, so look, we’ve got a lot of predefined content that’s been expert-vetted, so it ensures you’ve got everything you need to have and all that sort of stuff. And they’re great, they’re phenomenal. But what’s kind of cool about our platform is that you can come in and say, look, I like 90% of what you did here on explaining deepfakes, but I want to make this a little more specific to my company, so I’m going to come in and make these changes. You can actually go in and make the changes just like you would in a Google Slides deck or something like that.

So you can make all the changes really quickly and it looks fabulous and the consumer, the end employee loves it. So you could start with our base like that and use our base or you could build something completely from scratch and our AI builder will make the whole thing for you.

Beau Hamilton (34:10.75)
That’s exciting, yeah. It seems like a natural progression of what you guys have.

Brian Long (34:12.199)
Yeah, it’s just a much better way to make training personalized, individualized.

Beau Hamilton (34:17.57)
Awesome. Well, if security leaders or anyone else listening want to learn more, or maybe run a simulation of their own, what’s the best way to get started?

Brian Long (34:28.913)
Yeah, so just come visit adaptivesecurity.com, and there’s a button there where you can get a demo. There’s also a self-guided product tour on our site where you can take a quick tour of the product without even having to talk to a salesperson, walk through it, and see what’s in there. And it’s pretty fantastic. And then if you do our demo, it pulls in all sorts of elements about your company. It makes deepfakes of your executives that show up in the trainings and in the simulations.

It’s pretty wild, so come check it out.

Beau Hamilton (35:05.454)
Very cool. Yeah, I’ll have to give it a spin myself. Well, thank you for all the insights you shared. It’s been a pleasure. And again, thanks for entertaining me with all these different tangents we went on. It was a lot of fun.

Brian Long (35:15.161)
I had fun too, although next time I’m gonna get you singing. We’re gonna get you there. We’ll try.

Beau Hamilton (35:22.646)
Yeah, singing and whistling, and maybe I’ll even start thinking about some safe words in case I ever get, you know, impersonated and need to validate myself.

Brian Long (35:36.507)
By the next time, it might even just be a deepfake of you doing the whole interview, who’s to say.

Beau Hamilton (35:49.214)
That’s very true. That is very, very true. All right. Well, that’s Brian Long, Co-Founder and CEO of Adaptive Security. Thanks again. I appreciate all your time.

Brian Long (35:48.869)
Hey, thanks for having me on.

Beau Hamilton (35:50.214)
Thank you all for listening to the SourceForge Podcast. I’m your host, Beau Hamilton. Make sure to subscribe to stay up to date with all of our upcoming B2B software related podcasts. I will talk to you in the next one.