AI-Powered Test Automation Software: MuukTest | SourceForge Podcast, episode #102

By Community Team

MuukTest delivers fully managed, AI-accelerated software testing that gives you the impact of a full QA team in weeks—without the cost or complexity. By combining expert human testers with intelligent automation, MuukTest eliminates bugs, speeds up releases, and lets your developers focus on building great software.

In this episode, we speak with Ivan Barajas Vargas, Co-Founder and CEO of MuukTest. Ivan discusses the evolution and impact of MuukTest’s AI-driven quality assurance services. MuukTest leverages agentic AI technology to automate and maintain end-to-end tests, significantly reducing the time required for testing from weeks to minutes. Ivan emphasizes the importance of human testers in the loop, especially as software complexity increases. He shares insights into the challenges of test maintenance and the cultural and technical factors that contribute to test suite decay. Ivan also highlights the company’s focus on developing tools to support complex software testing scenarios, including the upcoming release of MCP servers and agents designed to enhance efficiency for test automation engineers. The episode concludes with an exciting announcement of free licenses for early adopters of MuukTest’s new tools.

Watch the podcast here:

Listen to audio only here:


Learn more about MuukTest.

Interested in appearing on the SourceForge Podcast? Contact us here.


Show Notes

Takeaways

  • MuukTest aims to deliver speed, quality, and cost together.
  • QA automation is costly when suites need constant upkeep.
  • Early AI work began before ChatGPT boosted momentum.
  • Customer feedback reshaped the product toward human-in-loop.
  • No-code testing works, but breaks at scale and complexity.
  • “Thoughtful testing” needs human context and QA expertise.
  • Test maintenance is unavoidable as software behavior changes.
  • Cultural neglect of maintenance causes suite decay over time.
  • Repair beats blind self-healing by keeping humans in control.
  • Repair fixes selectors and flows without rewriting from scratch.
  • LLMs improved incrementally; consistency varies by model.
  • Better outcomes come from training agents on real QA data.
  • 2026 focus: MCP servers and agents for SDET productivity.
  • First release targets Playwright repair with selector suggestions.
  • Launch includes free licenses: 500 yearly, 100 lifetime.

Chapters

00:00 – Intro and recap: effortless QA with agentic AI
01:51 – Why MuukTest was built: speed, quality, cost
04:40 – Early AI inspiration: DeepMind and legacy AI
07:43 – Why services informed product: testing the right things
10:07 – Why no-code tools fail at scale and complexity
12:22 – New direction: MCP servers for Playwright workflows
15:03 – Why test maintenance becomes painful so fast
18:13 – Why suites decay: technical change plus culture
20:13 – Repair vs rewrite and risks of blind self-healing
23:17 – Human-in-loop reduces hallucinations and false results
25:56 – Human context matters more with complex software
29:53 – LLM progress: incremental, agents drive precision
33:31 – 2026 roadmap: agents for SDETs and testers
34:27 – Repair MCP server release timing and integrations
35:53 – Release cadence based on most painful tasks
38:07 – Website and free-license offer details

Transcript

Beau Hamilton (00:00.154)
Hello everyone and welcome to the SourceForge Podcast. I’m your host, Beau Hamilton, senior editor and multimedia producer here at SourceForge, the world’s most visited software comparison site where B2B software buyers compare and find business software solutions. Today’s episode features Ivan Barajas Vargas, Co-Founder and CEO of MuukTest, who is making his second appearance on the podcast to share updates and dive further into the quality assurance services topic of discussion. Last time, almost a year ago exactly, we talked through how MuukTest is building quote unquote, effortless QA using an agentic AI assisted platform that can create, automate, and maintain end to end tests so that teams can get quality feedback without building a huge automation team.

We explored the company’s human in the loop model, why human testers will still very much matter, and why automation becomes an essential tool as release cycles pick up. And there was actually a case study that Ivan shared where a client they worked with went from running tests in three weeks to running them in about 30 minutes after using their platform, which I think is pretty incredible. And it really just speaks to the capabilities of MuukTest, but also how sort of groundbreaking these AI automation tools we’re seeing actually are. With that said, let’s bring in Ivan to build on those ideas and talk about where QA is headed next.

So, Ivan, welcome back to the podcast. Glad you can join us.

Ivan (01:42.446)
Thank you again for inviting me. It’s always fun to have this podcast and these conversations on QA testing, engineering, AI.

Beau Hamilton (01:50.85)
Of course, of course. I know there’s a lot to get into. And I want to just start to see if you can set the stage for listeners who maybe weren’t able to tune into the first episode, may not be familiar with what you’re doing. Can you share what originally led to the creation of MuukTest and what gap in software testing were you trying to solve for engineering teams, especially as they scale up in size?

Ivan (02:12.51)
Yeah, no, great question. Happy to talk about it. So I have been doing QA and testing for 22 years now; I started as an intern. And when me and my co-founder started talking about MuukTest in 2019, we saw how difficult and how expensive it was to do testing and test automation. Teams are spending millions of dollars doing testing and test automation. And in the end, we came to the realization that what customers ultimately want is instantaneous feedback on the quality of their software, feedback they can trust. What companies want to know is: if they push code, does the software have defects or not? So that’s why we created MuukTest, to help customers get that trustable feedback super fast without the need to invest years and millions of dollars in a project. So we are super focused on providing that instantaneous, trustable feedback every time they change code.

Beau Hamilton (03:50.362)
Yeah, well, I think that’s a good, I mean, really good approach, where you focus on one area, do that one area very well, and then branch out from there, as opposed to trying to tackle everything all at once, because that just kind of sets you up for a headache and possibly even failure, right, if you’re fragmented and discombobulated with tasks and prioritization and whatnot. But really interesting, I love hearing about your background and your journey in the space and, you know, just where you are now.

I know you’ve obviously spent years in the QA services industry, with your company starting in 2019. I think one of the most interesting quotes you had when we talked last was about how early you were to incorporate AI into your service. You said you had already been working on applying AI to software testing and QA for something like four years when ChatGPT first came on the scene. So when it came along and was made public and really made a splash, it gave you the momentum to lean further into it, as you and the general public really started to see the potential of this technology.

I’m just curious, like, can you talk about what that moment was like back then? Like what made it the time to bring your like internal tools to market? Did you have like a particular like aha moment?

Ivan (05:14.958)
So when we started with the idea, we started discussing it in 2019, me and my co-founder, about using AI to automate the whole software testing process. Not just test automation, because test automation usually means just automating scripts and running them. So we talked about the whole process, and one of the key sources of that inspiration was what Google was achieving with DeepMind at the time.

We saw a lot of progress on AI and machine learning and deep learning, and we said, okay, there has to be a better way to test and make the software testing process more efficient. So that’s where we got the inspiration, and we built the first version of the prototype in 2020. We created tools with expert systems and symbolic reasoning to suggest test cases based on the findings.

And as you mentioned, then ChatGPT came later, 2023, around that time. And we started using LLMs to create new tools after that. So yeah, the inspiration was all the progress we were seeing on AI, machine learning, deep learning. We even went back to legacy AI with expert systems.

Beau Hamilton (06:42.478)
Very cool. Yeah. The industry kind of inspires itself. And I think it’s interesting too, you came onto the scene at sort of a perfect time, where we really had to adopt a lot of these AI and software tools. I think of the pandemic, which maybe didn’t have a direct impact on you, but indirectly everyone was on their computer screens, probably developing more than ever. And then also with the AI, you’re using the technology to innovate in the QA space, and other companies using AI to make software are relying on your service as well. So you kind of create this circular sort of companionship, which I think is kind of interesting to think about.

But I don’t want to get too far down the AI rabbit hole just yet, because I have one sort of maybe philosophical question while we look back at when you first started the company. So you spent years delivering QA as a service. So I’m curious, how did that experience shape the product differently than if you had started by building software right away? If that makes sense. Like, is there a difference?

Ivan (07:55.337)
So actually, yeah, we started first with our product. We created a no-code platform with AI in 2020, a prototype, but pretty quickly, when we had the first version in 2021, we learned that the platform was powerful but customers were not getting the most out of it. And when we started talking to customers and analyzing why they were not getting the most out of it, that’s when we realized that companies needed help even deciding what to test. We started figuring out that the platform was great for automating tests pretty fast.

But many customers were not automating the right tests. So that’s when we said, okay, for someone to get the most out of it, they need to be automating the right tests in the right way. That’s when adding expert QA engineers in the loop helped, because now they could help the customers think about what makes sense to test, how should I test it, how should I automate it. Practically, the market told us: we need this help figuring out the best way to get the outcomes we want.

Beau Hamilton (09:30.618)
Okay, so if you look back at your multi-year journey in this space and the problem you’re trying to solve, would you say that what you’ve learned from your customers and how they use the tests has been kind of the biggest thing you’ve learned in the past six years or so? Is there anything else that maybe comes to mind where it’s different than when you first started out versus what you’re dealing with now?

Ivan (10:06.006)
Yeah, yeah. So great point. So actually, having direct contact with customers and the customers’ software has taught us a lot. One of the things we learned, too, was why the previous generation of test automation tools, no-code tools you could say, failed. I mean, many of them didn’t live up to the expectations, most of them. So this contact with customers taught us that one of the main issues was that no-code test automation struggles to scale with companies as their software gets complex. So it starts breaking, in a nutshell.

So we learned where these tests broke and why, and that inspired the new product development that we started doing last year. And we are releasing this year, actually next month we are releasing the first set of tools in that direction. So, learning that no-code test automation was not enough for many use cases of complex software, we started developing tools that work on top of high-code solutions, you could say, on top of Playwright, for example, to help the Playwright developers, the SDETs writing tests in Playwright, to be more efficient, to fix tests faster, and to have better visibility of their coverage.

So in a nutshell, then, what we learned was that no-code was good for certain use cases, but it broke with complex software and complex teams. So what we are creating is kind of a complete solution that can adapt. You can do no-code or low-code for simple software, or simple parts of the software. But now we are creating MCP servers and agents to help developers and SDETs create code faster, in a more efficient way, so they can scale faster with these complex use cases.

Beau Hamilton (12:21.178)
So sort of like troubleshoot maybe the areas, the sort of sticking points, the friction areas that you were noticing, like maybe the limitations. So you’re working on agents to help kind of troubleshoot customers working with these low-code tools?

Ivan (12:36.248)
We have agents, but the first release is an MCP server with different agents that help a developer or a tester, an SDET who is writing tests in Playwright, to pretty quickly fix selectors, for example. It suggests selectors and then the developer makes the decision: okay, yeah, let’s do that. It also pretty quickly suggests how to fix a test if the flow changes, for example. So it’s kind of an agent helper that is the right hand of the SDET, making them more efficient and helping them with the more mundane tasks of test automation, like selectors breaking. That’s the first one we are releasing, actually.
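
To make that concrete, here is a minimal, hypothetical sketch of the kind of choice such a repair helper could surface for a Playwright test whose selector broke. The page, selectors, and suggested options below are illustrative assumptions, not MuukTest’s actual output, and the engineer still picks which option to apply.

```typescript
import { test, expect } from '@playwright/test';

test('user can submit the checkout form', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // hypothetical app under test

  // Original selector: broke after a front-end refactor renamed the element ID.
  // await page.click('#btn-submit-2341');

  // Suggested option 1: a role-based locator tied to the visible label,
  // resilient to ID and class churn.
  await page.getByRole('button', { name: 'Place order' }).click();

  // Suggested option 2: a data-testid locator, if the team controls the markup.
  // await page.getByTestId('checkout-submit').click();

  await expect(page.getByText('Order confirmed')).toBeVisible();
});
```

The point is the workflow, not the specific locators: the agent proposes stronger candidates, and the SDET decides which one matches the intended behavior before the test is updated.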

Beau Hamilton (13:23.354)
Gotcha. Okay, interesting. I feel like, I mean, it’s all a learning experience, a learning process. And then, you know, having really started leaning into the AI tools since, I don’t know, 2023 or so, and having been at this company for six years, you’ve gathered lots of data and insights about how companies and clients are using your tools. So yeah, you’re at that point where, I mean, we’ve heard about agentic AI for the last couple of years or so, and I feel like this year is really where you’re starting to see it actually deployed at scale. I mean, it took a while to get there. And so now it kind of seems like that’s the phase where you’re at too, where you’re starting to deploy these agentic tools and really address some of the maybe shortcomings from the first iteration of your AI platform and tools and features, right?

Ivan (14:23.18)
Yeah, it’s more like completing, because at the end, something that we learned is, well, there is no one solution that works for every use case. So now what we are doing is creating these new tools to cover more complex use cases than no-code and low-code can cover. So it’s an expansion, you could say, taking advantage of the learnings from working with customers for years, dozens and dozens of customers, hundreds, and putting that learning to work in new tools so customers can have a complete solution.

Beau Hamilton (15:02.404)
Gotcha. So one thing I hear is teams talk a lot about building tests, which in your industry is that initial phase where automated test cases are created to verify how an application is supposed to behave at any given point in time. I think that’s the high-level descriptor, the way I’d put it. But I’m curious, can you share why maintaining those tests becomes such a pain so quickly? Why does the maintenance part of the equation become so difficult?

Ivan (15:37.454)
Yeah, so it’s difficult because at the end what test automation is doing is like a mirror of the behavior of the software. So at the end, with test automation with scripts, you are verifying that the functionality of the software is appropriate. And if the functionality of the software changes, the automation must change as well.

At the end, maintenance is one of the main challenges because, as the software changes, it can break tests: the objects change IDs, what are called selectors change, elements change position, or you add new behavior or change the behavior of the software and the test has to be updated to validate the new behavior. So maintenance is complex because the tests can break if they are not automated in the right way, or if the test automation doesn’t have the capability to fix the test. But also, teams have to be pretty conscious about verifying the new behavior, because the software is changing how it works. So maintenance is critical for those two things. And that’s why it’s complex, and it’s still pretty valuable to have an expert tester helping with that maintenance to make sure that everything is working in the right way and validating the right things, which is the most important thing.

Beau Hamilton (17:16.038)
I think that makes total sense. You know, it’s a contrast with writing the test in the first place, where in that phase it feels like progress. It’s visible, you can see the measurable results, it’s easy to celebrate when things work and go according to plan. But maintenance, on the other hand, feels more like a tax. Once things start changing, whether it’s a UI update, new features, a quicker release cadence, whatever it might be, that maintenance tax compounds quickly, especially if you don’t prioritize it where it should be prioritized. And suddenly teams are spending more time fixing tests than actually learning from them. And so I think that’s where things start to get really frustrating, you know?

I’m curious too, from a philosophical standpoint, from what you’ve seen across different teams and stacks, what usually causes test suites to decay over time? Is it mostly a technical issue? Is it a cultural reason within a company and a team? Or maybe is it a mix of both?

Ivan (18:37.896)
It’s a mix of both. One is, it’s inevitable, because at the end the software is changing. So if the software changes, it will automatically cause the test automation suite to change. And culturally, sometimes what I see is many teams don’t expect the tests to change.

And they think they can get to the point where the test suite won’t change. And that causes them to not put in enough resources, and the software starts changing, the tests start breaking, and then the failures start compounding. And when they want to catch up, the test automation technical debt is huge. So culturally, it’s important to own that maintenance is part of the game. It’s not avoidable; it’s totally unavoidable. And culturally, the teams have to own that, so they can assign the right amount of time to someone to make sure that maintenance happens in the right way. Using tools, using AI, you can use different tactics. But it’s important to acknowledge that change is unavoidable, because the software is a living entity, so the test automation suite is a living entity as well.

Beau Hamilton (20:12.89)
I like that, viewing it as this living, breathing organism that needs constant maintenance, or whatever you want to call it, sustenance, food. I think it’s an interesting way to put it. I remember, and I believe it was in the last interview, we discussed this term you had, talking about repair, repairing tests. And I always kind of wanted to unpack that, the way you phrased that. I’m curious, what does repairing a test really mean in practice? And how is it different from just rewriting a test from scratch?

Ivan (20:53.174)
Yeah, repairing a test versus rewriting. So for a few years now, tools have had self-healing. Self-healing fixes the tests that can be fixed. But the risk of automatically fixing a test blindly is that the software behavior might change, and that change now causes an expectation to change. For example, if you are filling in a form and you had an automated test there validating that the currency selector offered US dollars, euros, Mexican pesos. But now you also added Canadian dollars.

So the test could break and it could auto-heal. But if it was a mistake to add Canadian dollars, someone needs to make sure: is this expected or not? So that’s a challenge with auto-healing and self-healing versus repair or rewrite. When you repair a test, you also give a human in the loop the chance to verify the behavior pretty quickly and say, this is expected, this is not expected, and automatically fix the test. If it is a developer or SDET using Playwright, what we are doing is giving them the options to repair the script, so the SDET can say, yes, repair it with option one or option two.

In the example, expect Canadian dollars or don’t expect Canadian dollars, and the test is automatically repaired. Rewriting, then, is creating the test from scratch, which takes more time. So the approach we are taking, we are calling it repair, because it’s practically helping the developer, the SDET, to select the right fix, versus just auto-healing, which could cause false negatives and false positives.
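
As an illustration of that decision point, here is a minimal, hypothetical Playwright sketch, not MuukTest’s actual tooling: the assertion pins the expected currency options, so adding Canadian dollars makes the test fail, and a repair flow would ask the engineer whether the new option is intended before the expectation is touched.

```typescript
import { test, expect } from '@playwright/test';

test('checkout form offers the expected currencies', async ({ page }) => {
  await page.goto('https://example.com/checkout'); // hypothetical app under test

  // The test encodes the expected behavior: three supported currencies.
  const options = page.locator('#currency option');

  // This assertion fails once Canadian dollars appear in the dropdown.
  await expect(options).toHaveText(['US Dollar', 'Euro', 'Mexican Peso']);

  // Repair option 1 (the change was intended): accept the new behavior.
  // await expect(options).toHaveText(['US Dollar', 'Euro', 'Mexican Peso', 'Canadian Dollar']);

  // Repair option 2 (the change was not intended): keep the assertion and file a bug,
  // which blind self-healing would silently paper over.
});
```

The contrast with self-healing is that the repaired assertion reflects a human decision about whether the new behavior is correct, rather than the tool silently updating the expectation to match whatever the application now does.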

Beau Hamilton (23:24.142)
Gotcha. Okay. That makes sense. Thanks for explaining that, because sometimes the terminology can get tricky, but I think that’s an intuitive way to put it. I mean, repairing implies continuity, intent, I guess, to fix and solve and refine, versus rewriting, which is self-explanatory. You’re sort of throwing things away and starting over from scratch, which, like you said, takes a lot of time, more resources, and most likely causes delays. But yeah, so in regard to repairing a test versus rewriting it from scratch, I think it’s better to preserve and keep adapting and repairing, rather than sort of resetting the entire suite, right? I guess it’s circumstantial, but generally speaking, having this repairability is huge.

Ivan (24:24.632)
Yes, yes, yes, because it’s optionality and it adapts to the software change, versus automatically saying this is the fix. Right now with AI there is also a risk of hallucination, right? So with the repairing approach we are taking, where the SDET can select the option or override the option we suggest, the humans have control to say this is true or this is not true.

Beau Hamilton (24:53.012)
So when we zoom out a bit, I think this, the previous question is sort of tied into how teams think about quality overall, but I’m curious, how does better visibility into what’s actually being tested shift the conversation between QA, engineering and leadership?

Ivan (25:02.754)
Yeah, that’s a great point. It’s something different people have different views on. At the end, high visibility into test coverage can help drive discussions between development and testing, for example, and the management team, on what’s being tested and who owns what. For example, developers still need to own unit testing and some integration testing. But after that, the testing teams need to own end-to-end testing and functional testing. So having a good view of test coverage and who owns what helps clarify the ownership among the different teammates and the different stakeholders: who owns what, who is testing what. And at the end, they can see an overall view of what is really being tested, so they can make decisions about what else to test or not to test, and also ship-to-production or not decisions. So in a nutshell, what it does is help the conversation. Having good test coverage visibility helps drive the conversation between the different stakeholders.

Beau Hamilton (25:38.842)
How would you alleviate maybe some fears around automation, for developers and software engineers in the space who are maybe concerned about the lack of a human in the loop? Because I remember in the previous interview you mentioned you’re always going to have a human in the loop unless we hit this sort of dystopian phase where we have general artificial intelligence, right? There are various timelines being floated, but I feel like we’re still a long way out. And if that even comes along in the first place, it’s going to have more of a societal, bigger-picture effect than just the effect it has in the QA space. But I’m curious, has your view changed around keeping humans in the loop since the last time we talked, as more of these automations come down the pipeline?

Ivan (25:56.11)
I think it has strengthened. On the other hand, we have learned more about where the power of AI and agents is and where they need expert help. So over the last year, since we last spoke, we learned more and more about what is possible. For example, we were talking about one of the deficiencies or weaknesses of no-code tools, which is that they don’t scale with complex software. So that’s where more human in the loop is needed, because complex software needs the comprehension of a human. We are calling that, internally, thoughtful testing and thoughtful test automation.

There is a strong need for someone really learning the behavior of the software, and also how the users use the software that will be tested, to design the proper testing strategy and also the right test automation strategy. So they need to know the context, the human context. So yeah, it has strengthened.

Beau Hamilton (27:13.914)
Yeah, that’s what I was thinking. The context. Yeah, the context is, I mean, AI is getting better with context and tapping into previous interactions. I just think of some of the LLMs and the chats and stuff; it’s getting better at understanding who you are and the previous discussions you’ve had with it and whatnot. But the context is still lacking, and you’re not going to be able to replace a human that’s been in the industry working on QA for years and years and years. It’s just not at that stage yet, nor do you even really want it to be. You always want to have that kind of insurance of having a human to overlook things and make sure things are going according to plan and on the right track, right?

Ivan (27:58.636)
Yeah, that’s right. And at the end, AI is powerful, I mean, don’t get me wrong. We are using AI a lot. But the way we think about it now is: what tasks does that expert do every day, every hour, every minute that we can accelerate with AI, while they still have control to make a decision on what’s right and what is not right? So practically, the way we see it now is, how do we leverage AI to automate those routine tasks day by day and still leave the human in control and at the decision point?

Beau Hamilton (28:40.854)
Yeah, no, I know. So obviously, AI has been a big part of your story. It was a founding technology, and it’s only continued to become more useful and improve in its capabilities and whatnot. One question I was eager to ask you in preparation for this interview was just where AI has improved specifically in this past year, because here we are getting updates from you, having talked with you almost exactly a year ago now. And again, the industry just keeps going; AI doesn’t seem to be slowing down. There are more concerns, hesitations, whatever you want to call it, around this supposed bubble that’s building. I think, you know, there is a capital sort of bubble around all the money that’s being spent and whatnot, but it doesn’t seem like AI is going anywhere. It’s here to stay.

But in terms of progress and the capabilities you’re seeing it get better at, how has it impacted test health and test maintenance right now? What’s it capable of doing now compared to what it was capable of doing a year ago?

Ivan (29:53.502)
So general LLMs, I think they have had incremental changes rather than radical ones. So practically, from last year, there are no radical changes regarding LLMs. They have just gotten incrementally better. Sometimes the changes have even been decremental, like the quality of the AI has been lower for some use cases.

And actually, many of the companies in the AI space using LLMs have faced the issue that with some models, or some versions of a model, their solution and their prompts work better than with others. So it’s a tricky part, but the way we fix that is we have also been training our own models, and we have years of data, years of real test cases, and we have been dedicating a lot of time to making our own agents experts as well. Now our agents, I think that’s a big advantage, that’s the big job for us: we have been training our agents to be testing experts and test automation experts, so that now they can use the underlying LLMs in a smarter way.

So, more targeted and more precise. Then, in summary, general LLMs have had incremental performance gains in the last year, you could say. But the companies that have been taking advantage of that learning, that keep training their agents, keep gathering the important data, and keep iterating on prompts, have gotten to a better place of precision, so they can get the right prompts and the right outcomes. So, more fine-tuned. That’s where we have improved, and that’s the big change we have been taking advantage of.

Beau Hamilton (32:08.888)
Which is still really important and doesn’t necessarily get all the credit, or the shock factor, of a surface-level understanding of what these things can do. Just refining the service, refining some of these AI tools, so they work more consistently, understand more of the context, and just work when they’re supposed to work. I think that’s really important.

I think we get so accustomed, so comfortable, starting to incorporate some of these tools into our everyday workflow, that the wow factor has kind of started to wear off. But yeah, I think it kind of goes back to what you were saying about having the agentic kind of AI come in. If you’re stuck in a particular area and starting to hit some limitations, it’ll come in and start offering solutions to the QA engineer, right? And start working with them.

I’m curious, so one year later, we’re further into this AI future that we just continue to keep barreling towards. Where do you see things evolving in regard to QA? What’s your focus for 2026 and maybe over the next few years, if we can even plan that far ahead?

Ivan (33:30.894)
Yeah, so for 2026, with all the learning we have had, learning on what AI can help us with, learning on what customers really want to achieve, that’s another important part, we are leveraging these learnings in technology and knowledge to accelerate testing. But when we say accelerate testing, it’s helping testers and SDETs do their work faster.

For example, SDETs: 2026 will have a lot of focus from us on SDETs and test automation engineers, to help them be more efficient for these complex use cases. Because if something is clear from the last few years, it’s that for complex use cases, you need SDETs automating and thinking about what should be automated and how. So what we are doing is releasing different tools in 2026, MCP servers that can be integrated into any environment, Visual Studio Code, Cursor, Claude. MCP servers that help with specific tasks that are annoying and difficult for SDETs. For instance, we were talking about repair earlier on. So next week we are releasing a repair MCP server that helps by automatically suggesting new, strong selectors to fix Playwright tests on the spot. And we will keep adding capabilities. So 2026 will be the year where we will be releasing MCP servers and agents to help with specific tasks for SDETs, test automation engineers, and testers.

Beau Hamilton (35:22.66)
Very exciting. Yeah, you beat me to it. I was going to ask you when the first sort of set of tools were being rolled out with these MCP server upgrades you’re talking about. And it sounds like by the time this episode comes out, the first set of offerings will be available. So that’s exciting. Do you have a particular release cadence? Or is it like you’re constantly just working on these tools, and when they’re ready, you roll them out? Or are there like, hard dates that you have planned for your rollout?

Ivan (35:56.846)
Yeah, so we are constantly working on these tasks, because the way we plan it is: what task is difficult, annoying, or boring for SDETs or testers that we can automate with AI, or that we have already automated with MuukTest, and release as one specific function. So we already have a handful in the works, but how we decide the release cadence is based on how painful it is for the potential users, so they can get help faster. So we’re starting with the repair by the end of this week, early next week. And then we are continuing with the control hub, which also integrates into existing Playwright repositories. And it can help to define and give a view of the test coverage, visibility, and actionable plans.

And then we will continue releasing other MCP servers that help suggest tests for happy paths very quickly, and suggest tests for negative paths. Everything within the existing environment, Visual Studio Code, Cursor, or Claude, whatever the customers’ SDETs are using. So that’s the cadence so far. But yeah, we’re starting by the end of this week, early next week, with the repair agent.

Beau Hamilton (37:21.742)
Very exciting, very exciting. OK, cool. Yeah, I like the prioritization of the most tedious tasks that you want to automate, and really tackling those issues first with your automation solutions. I think that’s a good approach. Really interesting. A lot to look forward to. Ivan, it’s always a pleasure. It’s really just fascinating to learn more about this space and this particular, specific use case for AI in your industry. And I appreciate everything you shared with us. For myself and for listeners curious to learn even more about MuukTest and the platform, where would you recommend they visit?

Ivan (38:06.51)
muuktest.com. M-U-U-K-T-E-S-T dot com. And actually, I forgot something about the releases. So with this release of the MCP servers and the new agents, starting with the repair agent by the end of this week, we will be giving free access: 500 licenses for SDETs, free for the first year, and actually the first 100 are free for life. So yeah, go to muuktest.com to learn more about it if you want to take advantage.

Beau Hamilton (38:49.966)
Very exciting. All right, yeah, for listeners who have made it this far into the episode, you get that special bit of information there. The first 100 users get lifetime access, essentially. That’s exciting. All right, you’ll find more information down below in the description if you’re curious.

All right, that’s Ivan Barajas Vargas, Co-Founder and CEO of MuukTest. Ivan, thank you so much for joining me again. And I’m excited to have you on the show again, hopefully not quite a full year from now, but if that’s the case, you know, we’ll have even more to talk about. It’s been a pleasure.

Ivan (39:28.738)
Thank you for the invite. It’s always fun to talk to you.

Beau Hamilton (39:32.641)
Absolutely. All right. Well, thank you all for listening to the SourceForge Podcast. I’m your host, Beau Hamilton. Make sure to subscribe to stay up to date with all of our upcoming B2B software related podcasts. I will talk to you in the next one.