Overcommitted

Overcommitted brings you software engineers who are genuinely passionate about their craft, discussing the technical decisions, learning strategies, and career challenges that matter.



Ep. 36 | Navigating the future of AI agent security with Dan Moore


Show Notes

Summary

In this episode of the Overcommitted Podcast, Erika and Brittany discuss the evolving landscape of AI agents and their implications for security and identity management. Joined by expert Dan Moore, they explore the challenges posed by non-deterministic agents, the importance of granular permissions, and the need for developers to be aware of security practices as AI technology advances. The conversation also touches on industry standards, the role of developers in navigating these changes, and personal reflections on the future of AI.


Takeaways

  • AI agents are changing the landscape of software development.
  • Non-deterministic agents present new security challenges.
  • Granular permissions are essential for securing AI agents.
  • Developers must be aware of security practices in AI.
  • Industry standards for AI security are still evolving.
  • Separation of concerns can enhance security for agents.
  • The role of identity and authorization is critical in AI.
  • Business implications of AI agents are significant.
  • Developers should stay close to business needs and problem-solving.
  • The future of AI will require new skills and awareness.


Links


Hosts


Episode Transcript

Erika (00:00) Welcome to the Overcommitted Podcast, your weekly dose of real engineering conversations. I’m your host today, Erika, and I am joined by…

Brittany Ellich (00:09) Hey, I’m Brittany Ellich

Erika (00:11) We met while working on a team at GitHub and quickly realized we were obsessed with getting better at what we do. So we decided to start this podcast to share what we’ve learned. We’ll be talking about everything from leveling up your technical skills to navigating your professional development, all with the goal of creating a community where engineers can learn and connect.

Today, we are not just talking about securing users, we are talking about securing autonomous code that they control through AI agents.

Specifically, as these agents move into our enterprise systems, we’re asking the fundamental questions of who are you and what are you allowed to do? And these become huge challenges for our security stack. So to help us navigate this critical shift, we are bringing in an undisputed expert in identity and authentication, Dan Moore, who is the Senior Director of CIAM Strategy and Identity

standards at FusionAuth. We will be discussing what current identity protocols are being challenged by the rise of agents and what new standards are emerging to give autonomous coding agents a more secure, verifiable, and official identity. So, Dan, welcome to the show. Let’s talk about this evolving landscape of identity.

Feel free to introduce yourself. Say hi.

Dan Moore (@mooreds) (01:37) Sure,

yeah, yeah, hey, thank you so much for having me. Brittany and Erika, I am thrilled to be here. The short answer about who I am is I’ve been in the developer world for about 25 years, played a lot of different roles. My current role, as you mentioned, is at FusionAuth. We’re an authentication provider. We are really focused on customer authentication, but as…

you alluded to, a big part of that, at least in the last year or two, has been, well, what are agents? I know we’re gonna kind of dig into what that is, but we are extremely interested in kind of helping people secure their systems for users as well as for agents.

Erika (02:18) Well, yeah, you teed it up perfectly. Let’s start there. When we talk about AI agents, what exactly are we talking about? Is it just a fancy name for a piece of software, or is it something different? And do we even have a definition?

Dan Moore (@mooreds) (02:36) Yeah, I think it’s a great question because I think the honest truth is that people are, there are definitely people out there using AI agents to accomplish tasks, to get stuff done, especially in the coding realm, but also outside of the coding realm. But I think that we’re still kind of groping our way towards an actual real definition of something. But the way I think about it,

and this came up in conversation the other day, is it’s really like a workflow or a set of pieces of software that can accomplish workflows, which we’ve had for decades, right? Workflow software’s been around for a long time. The difference is that the workflow used to be defined in code or in static configuration, and now it’s more in natural language. And so the idea you can actually kind of give an agent a task in relatively natural language and have it…

not to overuse the word grope, but grope towards accomplishing it, which is what they do, is to some extent a game changer.

Erika (03:33) Well, so yeah, when this piece of code does start making decisions on its own, how does that change some of these security risks that we face? And what are the biggest things that we need to worry about when it’s an agent authenticating or authorizing versus a human?

Dan Moore (@mooreds) (03:58) Sure, and I will say, it is worth taking a step back and saying, the security problems that these new agents create are similar.

maybe they’re a little bit different in scope, but they’re similar to the security problems that we have right now, right? Because there’s plenty of security holes that humans drive through that agents are going to also be able to kind of attack or exploit. I guess I just want to say before people get all concerned about securing AI agents, they should be like, hey, what are we doing to follow all the rules and all the things that we should do to secure stuff ourselves?

Anyway, so set that aside. The way I think about it is that human beings are slow and non-deterministic, and software written five years ago, 10 years ago, even the vast majority of software right now, is…

fast and relatively deterministic, right? Charity Majors might have some issues with that, how deterministic it is, but it’s pretty deterministic and easy to reason about, whereas agents kind of fall in this middle ground, and so they are fast and non-deterministic.

So there’s this great thing called the lethal trifecta, which Simon Willison, who’s written a ton about AI and LLMs in general, talks about. The idea is that agents have access to private data, which again, software does too. There’s exposure to untrusted content, which again, software does too. And then…

there’s the ability to externally communicate. And so the issue is that because agents are non-deterministic and because they read that untrusted content, they can then be instructed to access your private data and then send it off in an email. And this is a real, I mean, I would consider this to be a new threat because of the ability to follow arbitrary instructions, right? Like if a piece of code got an arbitrary instruction, it wouldn’t understand it.

I should be careful here. There are ways to craft it, but it’s a much higher bar to, like, make a deterministic piece of software do stuff it’s not supposed to do, whereas with agents, people have found over and over again, it’s relatively easy to do that.

Those are kind of the bigger changes that I see coming down the pike: how can we address this? I mean, the non-determinism is the joy and the pain of agents, right? Like the fact that we can give it something and, as I mentioned earlier, it can kind of maneuver its way towards achieving a goal is a huge win. But that also means that other people who can communicate with it can do the same thing and kind of bend the agent away from the goal that we set for it.
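
To make the lethal trifecta concrete, here is a minimal Python sketch; the capability names and agent configurations are hypothetical illustrations, not anything discussed on the show.

```python
# Hypothetical sketch of the "lethal trifecta" check: an agent that can read
# private data, ingest untrusted content, AND communicate externally is the
# dangerous combination, because untrusted input can steer it into
# exfiltrating the private data.
from dataclasses import dataclass, field


@dataclass
class AgentConfig:
    name: str
    capabilities: set[str] = field(default_factory=set)


LETHAL_TRIFECTA = {
    "reads_private_data",
    "ingests_untrusted_content",
    "communicates_externally",
}


def has_lethal_trifecta(agent: AgentConfig) -> bool:
    """True when a single agent holds all three capabilities at once."""
    return LETHAL_TRIFECTA.issubset(agent.capabilities)


inbox_agent = AgentConfig("inbox-assistant", set(LETHAL_TRIFECTA))
summarizer = AgentConfig("doc-summarizer", {"reads_private_data"})

assert has_lethal_trifecta(inbox_agent)
assert not has_lethal_trifecta(summarizer)
```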

Erika (06:26) And non-deterministic in this case, meaning, can you like sort of define that with what you mean there?

Dan Moore (@mooreds) (06:32) Sure.

Yeah, yeah, that’s a great call out. Sorry, I work in the auth space, and using jargon is something that I try not to do, because it’s so easy to kind of get over your skis and assume everyone knows what you know. So non-deterministic in my mind means that, or sorry, let’s start with deterministic. Deterministic is you give

the same inputs and you get the same output. So math is a perfect example of a deterministic system. Gravity to some extent is, too. Whereas non-deterministic means you give it the same inputs and you won’t necessarily get the same outputs. Weather, human society, conversations with your partner, like these are all things that are non-deterministic because they’re affected by the current state of things. And so if you’ve ever asked

the same question of an LLM, you’re gonna get back slightly different answers, right? Or they could be kind of massively different depending on the way that, well, I should be careful here because I’m definitely not an LLM creation expert.

But at the end of the day, it’s not a system where you put in the same thing and you get the same thing out.
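
As a concrete illustration of that distinction (an editorial sketch, not something from the conversation): a deterministic function returns the same output for the same input every time, while a toy stand-in for an LLM that samples from a few possible replies does not.

```python
import random


def deterministic_add(a: int, b: int) -> int:
    # Same inputs, same output, every time -- like the math example above.
    return a + b


def sampled_reply(prompt: str) -> str:
    # A toy stand-in for an LLM: the reply depends on sampling, so repeated
    # calls with the same prompt can come back slightly (or very) different.
    return random.choice([
        f"{prompt}? Sure.",
        f"{prompt}? Probably.",
        f"{prompt}? It depends.",
    ])


assert deterministic_add(2, 3) == deterministic_add(2, 3)
print(sampled_reply("Is this input safe"))
print(sampled_reply("Is this input safe"))  # may differ from the line above
```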

Erika (07:50) Makes sense. Yeah, so you’re saying that that creates scenarios where it’s unpredictable how they might act in different scenarios.

Dan Moore (@mooreds) (08:01) especially with untrusted input, right? Like that’s where it gets really kind of scary. And the honest truth is, unfortunately, we think about untrusted input as something an attacker provides, but it could also be we make a mistake, right? Like we fat finger something, or we aren’t as clear in our tent as we want to be. And that’s all input that like, you aren’t sure how things are going to go. And that’s a new kind of security.

rep model.

Erika (08:28) And yeah, I guess it, like there’s different boundaries where this can be dangerous. Like within your own system is one thing and then sort of like crossing system boundaries too.

there’s the question of, like, how much context, how much data gets carried across from one system to another, and you’re talking about, like, privacy, like, you know, how do I authorize this context versus something else?

Dan Moore (@mooreds) (08:57) Yeah, I think actually that points to one of the solutions to this is that you start to have these kind of subagents that you know may have access like I mean one way to deal with that lethal trifecta I mentioned is that like you have

different agents that have access to different sets of tools or different sets of data and if you have one that can read your email and another that can send your email, it’s gonna be really hard for an attacker to be able to kind of get that email.

you know, that private data sent off, because you have to basically now attack two different LLMs. So kind of more separation of concerns, which again, this is not new in software. It’s not revolutionary, right? It’s a lot of the same principles. We’re just having to apply them in new contexts, right? Separation of concerns is a great thing for security.
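
A minimal sketch of that separation-of-concerns idea, with hypothetical agent and tool names: each subagent gets its own tool allowlist, so no single agent can both read the inbox and send mail out.

```python
# Hypothetical tool allowlists per subagent. Splitting read and send across
# two agents means an attacker would have to subvert both to exfiltrate mail.
ALLOWED_TOOLS = {
    "mail-reader": {"read_inbox"},    # can read private mail, cannot send
    "mail-sender": {"send_message"},  # can send, never sees the inbox
}


def invoke_tool(agent: str, tool: str) -> str:
    if tool not in ALLOWED_TOOLS.get(agent, set()):
        raise PermissionError(f"{agent} is not allowed to call {tool}")
    return f"{agent} -> {tool}: ok"


print(invoke_tool("mail-reader", "read_inbox"))  # allowed

try:
    invoke_tool("mail-reader", "send_message")   # blocked by the allowlist
except PermissionError as err:
    print(err)
```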

Erika (09:47) Right, yeah, going back to your point where the best practices for human authorization is the same idea for agentic authorization. You don’t want to give a human the keys to everything across all system boundaries. We try to enforce data protection and data privacy and minimum required access controls.

The same rules apply for agentic identities as well.

Dan Moore (@mooreds) (10:16) Yep.

Brittany Ellich (10:16) This is very fascinating. I don’t personally work on the identity side at all, so I’m just a software engineer and I just want to know like what do I need to do to keep these things working and not exposing anybody to problems? But I’m curious. I do work on the enterprise side and I’m curious what you’re seeing right now in terms of enterprise adoption of agents. Are there any like specific business or administrative problems associated with with?

that companies are experiencing or butting up against when they want to use agents.

Dan Moore (@mooreds) (10:45) I mean, I guess I would say that like first of all, it feels like there’s a ton of people that are experimenting with this stuff. And I think there’s definitely a feeling that there’s a there there. I think that I have not seen some really large public like case studies where people are like successful with this. And I don’t know whether that’s because it’s harder to be successful with or because the people who are successful are keeping their lips.

shut because it’s, like, a big competitive advantage. You definitely hear about people doing this around coding. I definitely have interviewed people that are doing spec-driven development and finding that to be very successful as kind of a one-person show, which obviously is kind of the opposite of an enterprise, which actually speaks to, I think, one of the complexities: I feel like when you’re doing greenfield development, agents

are an amazing productivity boost, whereas when you’re doing brownfield development or improving existing software, suddenly there’s a whole lot more context that needs to be available to the agent for it to make intelligent suggestions, and it’s less of a…

It’s less fun, right? Or maybe it’s less, that’s the wrong word. Fun is something that human beings think about greenfield development. It’s less capable, might be a good way to put it. But I think that around your specific question on enterprises, anything that is doing text evaluation, I talked to somebody recently who is doing extraction of form fields.

You know, a lot of document management stuff, I think AI is a good fit for. You know, as far as kind of pure agent stuff, like, the honest truth is, I feel like there’s a lot more smoke than there is fire right now. There’s a lot of people who are, like, building out all these agent systems. And then you ask them, like, hey,

I shouldn’t say a lot of people. I’m familiar with a couple of people who’ve built out some agentic systems and you’re like, this is cool. Like, who’s using this? And they’re like, well, I am, to build other products. And you’re like, that’s great, right? And that is definitely a use case, but it’s not like the enterprise is, like, pulling people in and saying, hey, we need to do this. Now, one thing I have heard is that

Having a natural language interface of an LLM into data and into documentation is very powerful for improving the knowledge worker experience, but I don’t know that I would call those agents, right? That’s just more of a natural language interface.

Brittany Ellich (13:02) Yeah, that makes sense. Are there any standards or is the industry approaching any sort of way to make sure that these are actually secure? I feel like you said that there’s not a lot of people doing this yet, but there’s lots of concepts of plans towards doing it. I’m curious, is there anything that the industry as a whole is saying, okay, this is a way that we can trust securing agents?

Dan Moore (@mooreds) (13:26) Sure, so you actually kind of asked two different questions, is…

Are there standards, and are we converging? I think that, like, there are a hundred percent standards. I just went to the IETF meeting in Montreal, and I think AI was mentioned in every single meeting I went to. And I went to most of the web stuff, I didn’t go to, like, the telephony stuff, but there was an agent draft mentioned there, I think, in every single meeting I went to. But I think that it’s all still early, early days.

And so they’re trying to solve, in some cases they’re trying to, like, think of agent identity as workload identity, right? So there’s the WIMSE group, there’s a draft from Aaron Parecki, who’s an Okta guy, who’s talking about cross-trust boundary authentication. I feel like

There’s two kind of main ways that agents communicate. There’s the agent-to-agent protocol, which basically has a lot of different, it’s kind of a pick-your-flavor kind of

protocol when it comes to identity. And then there’s MCP, which is the Model Context Protocol. And that seems to have been kind of standardized on OAuth for the majority of kind of enterprise use cases. So there’s a lot of activity in the OAuth working group around some of the kind of follow-on standards around that, like how to make it easier for MCP clients, which essentially are, like, IDEs or other kind of

AI agents to register with an authorization server, and then, like, what goes in the token. But to me, a lot of this comes back down to what we talked about

earlier, which is doing stuff that we know is good practice, like principle of least privilege, and sophisticated authorization schemes maybe past RBAC, right? Once you get to a certain size, like ReBAC or ABAC or PBAC. And the nice thing about employing those is that it helps you whether you’re implementing for agents or you’re implementing for users at scale. And I still think that those are

underappreciated and under-implemented. But maybe AI agents will be a catalyst for pushing more people to handle that.
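
For readers who want to see what "least privilege" looks like in code, here is a minimal RBAC-style sketch; the roles and permission strings are made up for illustration and are not FusionAuth's API.

```python
# Minimal role-based check: each agent identity gets a narrow role, and every
# action is tested against the permissions that role grants. ReBAC/ABAC/PBAC
# systems answer the same question with richer inputs (relationships,
# attributes, policies) instead of a flat role-to-permission map.
ROLE_PERMISSIONS = {
    "calendar-agent": {"calendar:read", "calendar:write"},
    "report-agent": {"reports:read"},
}


def is_allowed(role: str, permission: str) -> bool:
    return permission in ROLE_PERMISSIONS.get(role, set())


assert is_allowed("report-agent", "reports:read")
assert not is_allowed("report-agent", "calendar:write")  # least privilege holds
```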

Brittany Ellich (15:37) Yeah, it kind of seems like the average developer needs to know a little bit more about security and auth than they ever really needed to before. At least right now until there’s like, you know, an idea around like the right way to do this because people are still trying to figure it out. What would you rec… go ahead. Yeah.

Dan Moore (@mooreds) (15:53) And actually, can I ask you real quick? mean, so

when you say that, like, do you mean because they’re using AI agents and so they need to have some consciousness about that, or because they’re building software that will be used by an agent? Why do you think now is the time for that? Because security people have been saying developers need to know about security for, you know, shift left and, like, you know, a long time. So why now, why is it more important now, in your opinion?

Brittany Ellich (16:22) I think that there’s like more out of the box options for regular auth than there is for AI agents where like I would never do it. I would never roll my own and figure out how to do this for like any website that I create. But if I’m creating some sort of an AI agent, like it doesn’t sound like there’s like a thing that exists yet that I can say like, okay, this is going to cover a lot of the security concerns. ⁓

Dan Moore (@mooreds) (16:43) I

that makes sense. And I think some of that’s because it’s early days and some of it’s because the protocols are still being like hammered out, right? Like the MCP spec, there’s a new release coming in 11 days and it’s going to have some shifts even in how you do kind of OAuth for MCP clients. And that’s like, you know, it’s only been out for a year and there’s already been like multiple different ways to do stuff. So that makes sense. Thank you.

Brittany Ellich (16:50) Mm-hmm.

Erika (17:11) I think also like, unawareness that this standardization is like still forming is also important because I mean, I guess there’s like the whole legal side of things where like that isn’t even figured out yet necessarily of like, if I like if I direct an agent towards a certain goal and it takes nefarious

to get there, am I responsible? And like, I don’t know, like we’re mentioning like what, you know, the risks are as a developer, like why you might need to know, like, you know, as a developer knowing like, hey, somebody potentially like promises me this agentic workflow that’s gonna like solve all my problems, like, okay, well, like, you know, both eyes open, like.

look at what it’s actually doing, like understand that, you know, this, this, like identity and authorization piece might not necessarily be a completely solved problem yet. So like if somebody’s promising you like, you know, all the guardrails are there and like, don’t worry about it. Like maybe question that, maybe look at it twice before you like, again, like kind of authorize it to do, something that you wouldn’t want to be responsible for.

Dan Moore (@mooreds) (18:32) Yeah, that’s a great point. There is a little bit Wild West feel to a large chunk of this right now. And just like, I think…

I mean, I’m trying to think of, like, an analogous situation. Honestly, it probably is a little bit like the early internet, right? When people weren’t sure, like, what to do about credit cards, or could you even pay for something over the internet with a credit card? And developers, to Brittany’s point, needed to be a lot more, like, cognizant of that kind of stuff. Now it’s a lot more accepted, so you don’t even think about that kind of thing, but we’re back in the early days, which is exciting and scary and spooky all at the same time.

Brittany Ellich (19:12) I feel like I’m hearing that crossover and overlap a lot in a lot of different spaces right now. Like everybody’s like, this feels like the dot com bubble, you know, not just in terms of what’s coming out, but like economically and like, you know, just the vibes that we’re getting. And it’s going to be interesting because I think, you know, obviously we’re still using the internet, even though at the time I would imagine it probably felt like we were, you know, not going to for a while. So,

Dan Moore (@mooreds) (19:38) I think people always knew that it was gonna be big. I don’t think people like knew

exactly how in-depth it would get into our lives. But I don’t think anyone, even after, like, ’99, 2002, when it was like the depths of the nuclear winter, people weren’t like, I’m never gonna use the internet again. It was just too easy to do things. So, you know, and I think that is exactly analogous here: there’s no doubt in my mind that five years from now, we’ll be using gen AI in some form, right? Like, what it actually looks like, I don’t know. What’s the Google Maps of gen AI?

You all are too young to remember when Google Maps came out, but it was amazing. You’re like, my god, I can scroll around, and it blew my mind. And it came out of Ajax, it came out of the early internet, but it was not something that, like, anyone… people had to invent it. So, like, what are the things that people are going to invent with AI? So we’re kind of far afield from authorization and identity, but

I don’t want to be like an old man shaking my fist on the lawn, but I feel like we are going to have those kinds of moments again in the next couple of years.

Brittany Ellich (20:47) Yeah, yeah, I agree with that. is there, so before we go back to the authorization space, I’m curious, you having had the experience of living through that time, is there anything that you’re telling developers right now, like things to do for their career to like, you know, bubble, prevent themselves or anything like that to like, is this, I mean, obviously these generic skills aren’t going to be completely useless. Cause like you said, we’re probably still going to be using it. I’m not going back to the time where I had to write tests.

myself. Like, that’s just not happening. So I feel like there’s still going to be something that’s around. So are these skills still worth investing in, in your opinion?

Dan Moore (@mooreds) (21:21) AI skills or what skills I’d recommend.

Brittany Ellich (21:23) Yeah, AI

skills, AI development, you know, like building these AI agents, is it worth, do you think these are still going to be around?

Dan Moore (@mooreds) (21:30) I mean, I don’t know enough about AI agents to tell you that, right? I think that like whenever there’s a new tech coming on and this happened with mobile, it happened with cloud, happened with the internet itself.

it happened with, like, React, like there’s, like, two paths: you can either be kind of at the forefront surfing that and, like, making the investment in time and energy to kind of be that local expert of some kind, right? You can be local to your company, you can be, like, in your community, you could be, like, worldwide, but, like, staking a claim to that, and I’ve seen people create great careers doing that. Then there’s the people who are like, actually, I’m gonna be a follower and I’m gonna wait for things to shake out, which is a little

it’s a different kind of risk, right? Because it’s possible that you could miss something that…

is good and you won’t be able to, like, stake your claim as widely, right? If you’re the thousandth React developer, you’re not going to be as known as the second React developer. But you also miss out on things, right? Like, so you might miss Ember.js or Meteor, right? Which were big, big, you know, front-end frameworks for a little while. And now I think they’re still used, but, like, they’re definitely not winners. And so if you’re a world-class expert in Ember, your options

are a lot smaller, and you’ve invested a lot of time and effort, than if you’re a latecomer to React. So the only thing I would say, if someone came up to me and said, hey, how can I avoid getting smacked around by this bubble, it’s like: be close to the business, understand how to solve problems, be a collaborator, and, you know,

and be aware of these technologies. I don’t think you need to be an expert in them, but I think you need to be aware. I mean, listen to podcasts like this one, right? And, like, be aware of that. Be aware of this kind of stuff so that when it gets to that point, you’re not blindsided. But being close to the business and, like, I mean, I honestly think that there will still be an idea of a developer 15 years from now because…

taking people’s wants and desires and determining what’s real, what they will pay for, and how it fits into complicated techno-social systems like big software projects, that is not a normal skill. And that is a skill that, like, I think developers are well suited to offer, not programmers, you know, as I use the terms, programmer versus developer, in my mind.

Erika (23:58) Thanks for the plug for this podcast. I appreciate it. Extra boost. Let’s go back to authorization and dig a little bit more into technical details. So when an agent needs to open a file or an internal tool,

in your mind, what is the best way for it to prove that it has permission? Is it sort of giving a machine a broad security pass, or do we think that the system needs to evolve to adapt to having a more native agent ability within the system to have specific granular permissions?

Dan Moore (@mooreds) (24:42) I mean, I think that…

Granular permissions are the way to go, for sure. And, you know, I think that the two paths you can lay out are, like, API keys or tokens, right, access tokens. And I feel like, to some extent, we talked about echoes of the past: this whole AI thing of, like, software calling other software services feels a lot like the API landscape of, like, the 2010s.

And we’ve been through that and I’m not saying OAuth is perfect. I’m not saying bearer tokens are perfect, but there’s like a lot of infrastructure around it. It’s kind of a well-known protocol. And so I think that…

OAuth tokens properly constrained with scopes, and probably fine-grained access behind that, ReBAC, RBAC, PBAC, we talked about that a little bit, is probably the path forward. I’ll also say that’s in kind of a production context. I think there’s this idea of, like, a non-prod context, like your own laptop, where you can YOLO things a lot more. And I definitely have read, this was probably six months ago, where someone was just like, I just gave Claude total access to my computer and it can do whatever

it wants, and everything’s under git control, you know, and pushed remotely. So if it totally blows things up, I don’t care, because I just kind of fall back, and then I can just type whatever I want and Claude can run wild. And I would never do that in production, but in non-prod it seems like it’s a good way to build an intuitive sense for, like, how this piece of software, you know, interacts with the environment it’s in.
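
Here is a rough sketch of what "properly constrained with scopes" can look like at the point where an agent calls a tool; the scope strings and claims layout are hypothetical, and a real deployment would also validate the token's signature, issuer, audience, and expiry.

```python
# Hypothetical check: an already-validated OAuth access token's scopes decide
# whether a specific tool call is allowed. Scope names and claim fields are
# illustrative only.
def can_call_tool(token_claims: dict, required_scope: str) -> bool:
    granted = set(token_claims.get("scope", "").split())
    return required_scope in granted


claims = {"sub": "agent:build-bot", "scope": "repo:read issues:write"}

print(can_call_tool(claims, "repo:read"))    # True: narrowly granted
print(can_call_tool(claims, "repo:delete"))  # False: never requested
```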

Erika (26:14) Makes sense. For Fusion Auth specifically, how are you focused on identity right now and what are you planning in this new wave of non-human users?

Dan Moore (@mooreds) (26:25) Yeah, I would say first of all that…

As I kind of mentioned earlier, there’s still a ton of web apps that are, like, you know, Brittany, you said that you know how to use auth in a normal app, and I totally agree that a lot of folks do, but there’s a ton of folks, and I’ve been at conferences where they’re like, yeah, we still maintain our own auth system, or we rolled our own. So there’s a lot of hay being made just from that perspective, right? But we are an OAuth system, and we support some of the grants that I would expect agents to use. I’ve been playing with

AWS AgentCore, which is an agent-building framework from AWS. It just went GA, and they’re going to be talking about it a lot at re:Invent. I’ve been playing around with that, and that’s all using kind of the client credentials grant for OAuth or the authorization code grant. Those are kind of, client credentials is good for when you have agents talking to each other that are kind of independent. And then the authorization code grant is great when it’s Dan who

wants to delegate access to an agent to go update his Google Calendar or interact with some other service on behalf of Dan, so the delegation scenario. So I think that in…

I don’t see right now any reason from the identity and authorization space to kind of reinvent the wheel. It feels like we should push OAuth and I mentioned it was being brought up in the IETF. There’s definitely extensions coming that are talking more about that aspect of it but I think that there’s no…

From what I can see, there’s no reason to kind of throw the entire thing out and start from the beginning again, because we have these building blocks that work for software and work for humans right now. And as I mentioned, agents are kind of a mix. And so I think we can continue to push the OAuth specifications forward and solve these problems.
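
As a rough illustration of the first of those two grants, here is what a client credentials request might look like; the token endpoint, client ID, secret, and scope are placeholders, and the delegation case Dan describes would use the authorization code grant (with a user in the loop) instead.

```python
# Sketch of an OAuth 2.0 client credentials grant, the flow for independent
# agent-to-agent (machine-to-machine) calls. All endpoint and credential
# values below are placeholders.
import requests  # third-party: pip install requests

TOKEN_ENDPOINT = "https://auth.example.com/oauth2/token"  # placeholder URL


def fetch_agent_token() -> str:
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "client_credentials",
            "client_id": "example-agent",        # placeholder client
            "client_secret": "example-secret",   # placeholder secret
            "scope": "calendar:read",            # request only what is needed
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


if __name__ == "__main__":
    token = fetch_agent_token()
    print("access token length:", len(token))
```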

Erika (28:22) I am going to take some homework and learn more about the OAuth spec. I’m now very interested in more about how that’s implemented.

Dan Moore (@mooreds) (28:31) Yeah, please. I can send you some articles if you want because I think there’s definitely some good content out there that’s talking about this in the general sense and also in the context of just agents. Yeah.

Erika (28:31) Yeah.

Yeah, that’d be

great. We can share them out in the show notes, too.

Dan Moore (@mooreds) (28:47) Awesome.

Erika (28:48) Well, this has been a fascinating conversation, and we always end with a fun segment. Today’s fun segment is inspired by this topic of agents. You mentioned subagents at the beginning, and for anyone who hasn’t worked with a subagent, a lot of them are defined with this, like, specification file, which is basically text that tells them

who they are, I mean, any agent, you kind of tell it, this is what you are, this is what you’re good at, and sort of, like, super boost their powers. So I thought it would be fun to think about, if we were agents, what spec files we would write for ourselves to super boost our abilities.

So I can go first. I would… I would do something around communication and writing because I always think that’s a really hard piece and I think…

specifically, like, asking myself follow-on questions and, like, understanding what other people would ask or what other people might think about. So, you know, my spec file might be something along the lines of, like, generating different perspectives and having, like, helpful sort of discussions, you know, sort of like a set of rubber ducks that all have different identities that

can, like, ask me different questions to make sure I’m thinking through something fully. So that would be my superpower spec file. Brittany, do you want to go next?

Brittany Ellich (30:29) This is like an incredibly introspective question, masquerading as something that is not. love this. ⁓ I think that if I were to have like a spec file or at least like a desired trait basically that I would write, it would be that I’m really good at like following up with people and like keeping in touch with people. I feel like that’s a skill that I’m not super good at and would like to get better at. So I’d be an expert in…

Erika (30:31) Good.


Brittany Ellich (30:56) friendship or something like that. Really good at, you know, keeping up with the folks that I already know and like building those stronger ties. Dan, do you want to go next?

Dan Moore (@mooreds) (31:06) Interesting, yeah, wow. I feel like we could get an entire podcast about this question. And one thing that, this is kind of off topic, but like, it almost sounds like a spec file or a skills file is like a mantra for the agent in something that like repeats to itself and like becomes better at. So am I, ooh, does that mean the reverse is true for us? Anyway, mine would be my entire life,

or at least my entire adult life, my beliefs have been driven by my desire and lack of fear and willingness to ask good questions and, importantly, I think, to listen to the answers. And as I’ve gotten older, I think I have a little more

nuance about when the right time to ask the questions is, so I’d kind of add that in too. But I also would say that I think each of these spec files comes with a license, and mine is gonna be GPL licensed so that it infects other people with the same skill to, like, ask good questions and listen to the answers.

Erika (32:07) Well, thank you both for going with me on that journey of introspection. I know it was a little meta, it feels fun to think about. Well, thank you again, Deon, for joining us. If people want to find you, where can they look you up?

Dan Moore (@mooreds) (32:24) Sure, yeah, so I’m on LinkedIn, just moreDS, M-O-O-R-E-D-S, or Dan Moore Boulder will pop me up, or Blue Sky is the other place that I hang out and talk a lot, and that’s moreDS.com is my profile name, and then if you wanna learn more about Fusion Auth, or I also write on the blog there a lot, it’s fusionauth.io, and would love to connect with anybody on social.

I know people always say this, but I’m happy to hear from anybody who has discovered me or my thoughts through the Overcommitted Podcast. I’ll be happy to connect.

Erika (32:57) Well, thank you listeners so much for tuning in to Overcommitted. If you like what you hear, please do follow, subscribe, or do whatever it is you like to do on the podcast app of your choice. Check us out on Blue Sky and share with your friends. Until next time, goodbye.