Jonathan Tamsut (00:00) Welcome to the Overcommitted Podcast, where we talk about our commits, our commitments, and some stuff in between. I’m your host, Jonathan Tamsut joined by my fellow code enthusiasts, Bethany, Erika, and Brittany. We’re a crew of software engineers at GitHub. We first connected as an onboarding group on the same team in GitHub Actions and quickly discovered our shared passion for learning and building cool stuff.
Even though we're scattered across the country, we still come together regularly to talk about tech, share what we're learning, and just talk about our lives. So whether you're committing code or committing to new challenges, we're glad you're here. Let's dive into our topic today: AI ethics. So I can tee this up for us. So yeah, so you know,
This is kind of a hot topic today. I think there's a lot of voices in this space, a lot of people talking about sort of the existential risks. I think one thing that probably doesn't get talked about enough is the impacts of our technology. I think we're all sort of techno-solution oriented. One thing that I want to think more about is, you know,
how are these technologies being used by people? What are their impacts? And I hope that, as a society, there's more conversation and thought about that.
So to kind of focus the conversation today, we're going to get a little existential, a little abstract and philosophical. Hopefully we don't lose you, listener. But we're going to focus on three questions and kind of see where they take us. So one of the questions that a lot of people have is, what do humans do when we're replaced by AI? So the other day I took a Waymo, which is a
self-driving car, in lieu of taking an Uber. So yeah, what are the implications there? Another huge question that people talk about is, is the development of, or the pursuit of, AGI an existential risk to the human species? We'll talk about that. And then lastly, for the software engineers listening, or just the technology builders, what responsibilities do we have as the builders of this technology?
Okay, so I guess we can dive into our first question. Does that sound good to you all? Okay, so yeah, so you know, it's funny, my mom was telling me not too long ago that she's worried that AI is going to replace all of our jobs. And I think, right, as people in the field, we know that's
not entirely true. Humans are still sort of required in a lot of roles. But it is interesting to think, you know, there certainly is job displacement. I think we see it with Waymo. So like, what do humans do? How is our society shaped as AI plays an increasingly greater role in our economy? Do you guys have any thoughts there?
Erika (02:40) Yeah, so I have a couple of thoughts. One is that, like, this is not a new problem, right? Like, AI is the new wave of technology, but massive shifts in technology happen, you know, cyclically. Probably the last major one was the internet, and before that, the invention of the personal computer. So there come these major technological
improvements and waves, and that fundamentally changes the way that we work. I'm still not entirely convinced that AI is that level of shift, but you know, if we're assuming that it is having that power and that effect, I think the biggest thing is that people need to adapt and train and learn how to use this technology.
Whenever you talk about education and training, there's always the question of access. So some people do have access to this and some people don't. A lot of the training or the prompting is in a certain language. So that affects who can and cannot
use this effectively. But again, that's not necessarily a new problem. You know, computers cost money, the internet's not available everywhere. So yeah, I mean, the biggest problem in my mind is not necessarily can we adapt, but who will be able to adapt through training and education. And unfortunately, yeah, it's probably
also not a new answer to that question.
Jonathan Tamsut (04:18) Yeah, you know, it's interesting. I was talking to a family friend, he's a lawyer, and he was saying he doesn't use AI at all. And I was thinking, yeah, you know, he probably doesn't need to, until, you know, it's possible that some law firm, some competitor of his, will use AI and they'll start winning cases. Maybe they'll get rid of a few lawyers and cut costs and start charging less.
That's sort of a competitive advantage. And so it's like, you know, I think these tools are powerful and people using them will have a competitive advantage. But yeah, I agree with you that in the short term, I don't see, you know, human labor becoming completely obsolete, right?
Brittany Ellich (04:55) I have some thoughts here as well. I agree that I don't think that human labor is likely to become obsolete. I think that this period of time might potentially be considered like an industrial revolution type event, like an AI revolution, assuming AI actually does all the things that it is currently hyped up to do. There is, I am not a psychologist, but there is a…
psychological theory called the end of history illusion, where there’s this bias that people think that they’ve experienced this like significant growth within their lifetime, but underestimate the amount of future change that they are likely to experience. And I feel like that’s probably part of what’s going on here is like, there’s a lot of folks that look at AI and they’re like, this is it, this is the end, you know? And that’s just, that’s not likely to be the case. I think that
Like the industrial revolution, where we went from a lot of folks, you know, twisting screws and stuff on an assembly line to some sort of manufacturing that
dramatically increases the speed at which people can work. But it's not like people stopped working after that. It just adjusted to other things. So I think it's pretty likely. I can already tell within my own day-to-day job that, you know, AI makes a lot of things go faster. And that…
doesn’t necessarily mean that we’re not gonna have software engineers in the future. We’re just gonna be solving different levels of problems and different types of problems and able to get work done at a higher rate. And anybody who has ever been on a team that has any backlog, I feel like, knows that there’s no endless, there’s an endless amount of work that can and will be done. So I think we’re likely going to see like…
fewer items going to the backlog to die forever and being able to like accomplish things much faster, but not necessarily like we’re not going to stop doing them necessarily.
Erika (06:42) Yeah, I think there's also the potential for quality of life improvements. I heard about this study about doctors using generative AI to help diagnose patients. And granted, this was a med school case study. So like,
the comparison was between doctors, you know, any number of years out of med school, diagnosing symptoms, versus AI. I don't remember which model they used, but it might've been ChatGPT, diagnosing the same description of symptoms. And the LLM did better.
It was, you know, accurate on, I don't know, something crazy, like 80 to 90% of the diagnoses. And it did better compared to the doctors, even though the doctors were allowed to use AI, the same tooling, and they would get the response, you know, with the correct diagnosis, and then they would
say, no, that's not true, I know better, and discount whatever the model gave them, which is fair. I think it's totally valid to be skeptical of these technologies before they're fully vetted. But, you know, in my mind, that's an example of, hey, are doctors going away? No, but
could we enable our doctors to be more effective in their diagnoses by, again, including this as a part of the training, or use AI-assisted surgery to make it more accurate? I see potentially lifesaving use cases there. Yeah, so even outside of software, I can see improvements,
but not necessarily a replacement in all cases. Yeah, I think you're right. In my mind, it's probably more the mechanical things that we can automate. And yeah, I mean, I think also, I'm sure there are doctors out there who like those tough cases, but I don't think anybody likes being wrong.
Like backlog cleanup, like these tough diagnoses, I think if we see it as a way to enable us to do better, yeah, it makes your job that much better.
Brittany Ellich (09:19) Yeah, my favorite way to look at AI is really as a quality of life improvement, like you said, Erika, because what you can do is use it to do all of the things that you don't necessarily want to do or don't love doing. Like, for example, I love writing. I hate editing. So I have AI do a lot of my editing for me, which is awesome. I like writing code. I hate writing tests. And so now I use AI to write all those tests. And it's amazing how much faster I can move to get things done and not get bogged down
by those things that I don't love doing as much that I can offload to AI, which is cool.
Jonathan Tamsut (09:51) Yeah, and I think there's so much here, because I think we're inherently threatened by this thing that can come in and replace us, right? And I think when people have resistance to AI, I suspect that's part of the reason. The other thing is, you know, obviously, I have a materialist view of the brain. I think that eventually we will have an AI that is just capable of everything we're capable of.
And in that regime, it's like, what even is work? Because then human labor is obsolete. So then what do we do with our time? And I think, who knows how close that is? It definitely doesn't feel close. The other day I asked ChatGPT how many Rs are in the word strawberry and it got it wrong. So yeah, there seem to be
limitations, but these models do… I mean, the pace at which these models have beaten benchmarks is crazy. So yeah, I do wonder how much job displacement there'll be in the short term. It might just be totally overblown, people talking about it. Or it could be sort of a catastrophic class realignment, I don't know.
Erika (10:55) Yeah, I also think that it's a bit of a language, like vocabulary, difference between maybe how we as technologists understand it and how non-technologists understand AI, artificial intelligence, because I think as technologists, we understand that there is a limitation. And like,
because we've peeled back the curtain, we've looked behind it, right? Once you look at a model, once you kind of look at what training data looks like, how to improve on these things, you know, not to say that I fully understand how these things work, but I at least know that it's not magic. But I think the use of the term artificial intelligence implies that it's boundless. And,
like you said, it still makes a lot of mistakes. So yeah, I actually really dislike that term and kind of try not to use it as much as I can, and instead try to use things like generative models, LLMs, that kind of stuff that's a little more specific and accurate to what I think it's actually doing.
Brittany Ellich (12:03) You learn it’s all just math. It’s just math all the way down.
Erika (12:06) Yeah.
Jonathan Tamsut (12:07) Yeah, a couple years ago, or maybe this was last year, I was listening to François Chollet, who I think was involved in the creation of TensorFlow. He's an AI researcher, and he was talking about the ARC-AGI Prize, which is this benchmark prize that has been created to really push and test the bounds of our current models. And, you know,
he was talking about, so there have been different versions of the ARC-AGI Prize. I think one of the versions has been sort of beaten by an AI model. Maybe o3 beat it. It was able to basically score better than any human. And it's sort of like, if you look at the challenges that they're posting online, they're just like moving blocks around and noticing patterns. And you know,
It's like, it is weird, because, yeah, these LLMs, they're doing matrix multiplication and they're finding kind of complex associations, but it really begs the question, what is intelligence then? Because there are times when you use these models and you're like, wow, that was really smart, I'm really surprised. And then there's times where the opposite is true. And
I think that's a really interesting question and one that I want to explore more. I don't think anyone really knows the answer. Are these models just compressing large amounts of information? What is true understanding? I think the most recent version of ARC-AGI has not been beaten. I think frontier models are getting low single digits on it, so it'll be interesting to see if that's beaten
this year or next.
But yeah, maybe we just need UBI, which stands for universal basic income.
Brittany Ellich (13:43) You'd be okay with that. What would you be doing if you were not a software engineer? What's your dream job?
Jonathan Tamsut (13:47) Yeah. Dream job? Probably just a life of leisure, a man of leisure. I would be, you know, pursuing hobbies and then getting bored of them after three months and traveling the world. That’s probably what I’d be doing.
Brittany Ellich (14:01) That sounds incredible. I think I would open a plant store. I want to learn about plants. And I just never have the time, because there are a million other things that you can learn when you can learn things that actually make you money. So what about you, Erika?
Jonathan Tamsut (14:06) cool.
Erika (14:16) I have so many answers to this question. There's so many things that I'd want to do. I feel like I'd still code though. Like I feel like I might still write open source software, even if I don't get paid for it.
Brittany Ellich (14:31) Makes sense. This is why we’re overcommitted. It’s because we’re just…
Erika (14:33) Yeah.
Jonathan Tamsut (14:34) Mm-hmm.
Erika (14:36) Hmm?
Jonathan Tamsut (14:37) Kind of, yeah. I actually have recently wanted to get into sewing and garment making. Like, I wanna hem my own pants and make my own clothes. That's something that I…
I would definitely learn that if I had unlimited free time.
Erika (14:50) This is kind of funny, because it's making me think of, you've seen the tropes of, you know, software engineers wanting to become farmers. We tend towards, if I'm not writing code, I'm going to do something very manual, like sewing, gardening, but not things that could be done by AI. So there you go.
Jonathan Tamsut (14:59) Mm-hmm.
Erika (15:14) return to the earth. Well, that sounds morbid.
Jonathan Tamsut (15:15) I have returned.
Erika (15:18) Not in that way. Salt of the earth. That’s probably more what I meant.
Brittany Ellich (15:20) Makes sense.
Jonathan Tamsut (15:24) I mean, it's like this paradox, right? Where you have these models that can do PhD-level mathematics, but couldn't bake a cake. Yeah, so it is interesting.
Brittany Ellich (15:34) Yeah.
Makes me think of Interstellar, where they got to that point where they're like, we don't need any more engineers, we need more farmers. And they were trying to get their kids to go into farming instead.
Erika (15:46) Yeah. You know what a weird area for me is? This idea of introducing AI in education. And I was telling you guys that I saw the co-founder of OpenAI.
Am I going to get this right? Yeah, the co-founder of OpenAI, at a conference a couple of weeks ago, and she was in conversation, and part of the conversation was about using Claude as a teacher. Oh, the idea was actually that AI will help
with access to education, because anybody can get on a computer and use Claude as a teacher, as a tutor, and it'll tailor-make the tutoring to you.
Which, you know, I do use LLM-assisted technology to learn things, and I find it very valuable. But I don't think that teaching is necessarily a science that can be quantified or, like,
you know, directed in that way. And I also think that part of teaching is teaching you how to be a good human. And I don't think that AI can do that. I'm not going to put a robot in a classroom and be like, okay, cool, now all the kids are gonna learn their ABCs and
we're good, stepping away now. No, part of it is you also have to learn social skills and how to interact with people. And I really don't think that's a place where AI technology can replace a human.
Jonathan Tamsut (17:35) It's absolutely destroyed computer science assessment, because I think about all my assignments in undergrad, all my CS assignments. AI could do them perfectly. But I also wonder, as people who are parenting small children, are you at all worried about the future careers of your children, or, you know,
even how education is going to sort of change?
Maybe this is just like a perennial concern and none of us know what the future holds, so.
Brittany Ellich (18:07) I feel like that's a thing, yeah, that most parents at some point worry about for their kids, regardless of what period of time you're in. I am a little worried, because I think that I'm likely to push my kids towards, you know, something like software engineering, and I don't know what that's going to look like by then. And yeah, there's the concern about how much education is going to change.
You know, I think there's a lot of stuff that has to be reinvented now. Like you said, assessments in general don't really make as much sense anymore. Writing essays is a very different thing than it was when I was a kid. So yeah, I'm worried, but I mean, there's a million other things to worry about too. There's other things that are higher on my list right now, I think. Yeah, yeah.
Jonathan Tamsut (18:44) Hmm.
Yeah, you gotta pick your battles, yeah.
Erika (18:51) Yeah, I don't think I'm worried about there being jobs available for my daughter when she grows up. I think it's more this idea that, again, we're gonna give all the kids an iPad and they're gonna learn from AI instead of having an actual teacher. That worries me.
And yeah, I mean, I think partially for the social skills side of it, and then also because I don't think that AI is accurate enough to teach our children. So yeah, I think that part of the education system, I mean, I have no idea, I'm removed enough from the school system now to not know
if AI is being used in classrooms currently, and how much. But I think also my philosophy as a parent is, school is only part of what you learn, and a lot of what you have to do is teach your kids at home. So.
Brittany Ellich (19:49) I think there's a lot of excitement too around the capabilities of introducing AI in the classroom. Because our current, or at least when I was growing up, the methodology was very much a one-size-fits-all approach. You've got 30 kids in a class and this is how we're going to teach this curriculum. Whereas I think that, you know, if you have AI guiding things, maybe there's the ability to customize things a little bit more than a teacher would be able to do. You could have a teacher lead
this overall curriculum, then customize the content in a way that the kids understand things, which I think would be, I think that's kind of exciting. And maybe, you know, it depends on the direction that things go. This could either be really terrible or not, and it's probably gonna be somewhere in the middle.
Erika (20:30) Yeah, I guess I also worry a little bit about the loss of critical thinking, where, I think we talked about this last week, part of being a responsible user of AI is recognizing that you still have an active role, like you're still doing the thinking. But that comes from years of not having AI.
So, like, we’ve always had to do the thinking. But I think, like, there’s a risk there where, like, if things become too easy and, you’re always given the answer, like, you don’t have that muscle to be like, but is this answer right? So, yeah, that also does worry me a little bit. But I think my…
My personal offset is like probably teaching her about some of these limits. And like, you know, as long as she’ll listen to me being like, you know, don’t trust it, question everything, be critical.
Brittany Ellich (21:26) There are models we can apply to this from existing things, too, where technology has improved dramatically in one spot and you still have to understand the underlying building blocks. The example I always come back to right now when looking at AI is graphing calculators. If you just hand somebody a graphing calculator and you're like, here, do some calculus, I mean, they could do it, but they probably can't explain it or know how to apply it to things. It's the same thing with programming and AI.
You know, you can hand somebody Copilot and say, here, go do this. And yeah, they could build it, but could they, you know, infer differences about it? If they don't understand the underlying building blocks of it, could they change it? Could they fix bugs? Probably not as well as somebody who does have those building blocks of understanding programming that they've built up.
Jonathan Tamsut (22:12) Yeah, it is. I mean, you know, vibe coding is just so seductive sometimes. You just don't want to think. You just want to throw things at an LLM, run it, see if it works, and just kind of iterate on that until all the tests go green. You know, it is interesting to see how that changes. I mean, I think there's a psychological thing where you're
relying on this tool now and you're pushing more and more thought into this tool, and I think naturally the more answers it gives you that are correct, the more you trust it. And so yeah, what are sort of the drawbacks there? I mean, sometimes it's good to go the hard way and figure things out. But yeah.
Brittany Ellich (22:49) I have a prediction there. I think that there's two types of software engineers. There's the people that are vibe coding and just hitting the tab key and just getting through stuff. And then there are people that are using it and making themselves more productive, implementing it in their workflows, and still thinking critically about the changes that they're making. And my prediction is that one of those sets is going to continue to be a software engineer
Jonathan Tamsut (22:54) Mm-hmm.
Mm-hmm.
Brittany Ellich (23:13) in the future, whereas the others might not be. They might be doing something else. Maybe they will be, but maybe that’s a spicy opinion.
Erika (23:17) Yeah. No,
Jonathan Tamsut (23:20) Hmm
Erika (23:20) I think you're spot on. And I think this is true even without AI. So many of the answers to, how do I fill-in-the-blank, for any software engineering question have been readily available through Google for so long. But I have seen, like you said, two different types of people: those who will accept the answer without trying to understand it,
or, on the other side, those who really dig deep, understand the tooling, look up the code, understand how things are working. And yeah, I think you're totally right. I think there will probably always be a place for the people who kind of want to vibe it out and do the least amount of required work.
But yeah, I think that the people who will continue pushing technology forward in a meaningful and probably professional way are the second type who still are the drivers who really understand how to get the most out of this.
Jonathan Tamsut (24:22) Yeah, it is kind of depressing, in a dystopian way, to just think of people who are just hitting tab. And yeah, I think a lot of what we do is frankly kind of uninteresting, right? Doing glue code or just boilerplate. Sometimes there's just not a ton that's interesting to understand.
There’s maybe a kernel of like interesting business logic or optimization that you can focus on that AI maybe couldn’t do or maybe it can do, but you just want to do it yourself.
One question people ask is, does AI pose an
existential risk to the human species? And I think, you know, there are people like Eliezer Yudkowsky, there's a lot of people who talk about alignment. And I think that the existential part is interesting, and honestly, maybe we should be thinking about it, because I guess if there's like a 1% chance that AI, you know, creates some sort of mass genocide, we should definitely try
to not let that happen. I do think, right, if we remove the word existential, what types of risks does AI have? We all know AI systems, you know, are difficult to explain. There's bias in them. There's now an ability to create realistic-looking content, text, video, and images, that can be used to polarize
an electorate, a population. There's an environmental cost to training these models. And I guess, yeah, we so rarely talk about how we can mitigate these negative consequences. So I figured this would be a good time to talk about that. I think the alignment question to me is interesting, but was there any question in particular that sort of
stood out to you all.
Erika (26:12) Could you give a brief description of what alignment is?
Jonathan Tamsut (26:15) Totally. So, um…
Yeah, basically, how do you train these models to be aligned with human values? And one kind of anecdote that you’ll hear a lot of people talk about, like Nick Bostrom wrote this book called Superintelligence, and in it he talked about this theoretical example where you tell an AI to build as many paper clips.
as it can, which is a relatively innocuous ask. But eventually the AI realizes that humans are consuming resources that could otherwise be used to build paper clips, so it exterminates the human species. And so I think there's two interesting questions there. It's like, what are our values? And then, how do you actually encode them? And I guess maybe the third question is, how do you tell, you
know, test, that our values are actually encoded in these AI systems? You know, right now there's bias in all of these AI systems. I mean, all data, I think, inherently has a bias. And so, yeah, how do we deal with that? I don't know if this is an interesting topic to you all.
Brittany Ellich (27:25) We're really solving the world's problems here.
Jonathan Tamsut (27:26) Yeah, it’s really
tough. I think there’s a lot of people at Anthropic and all these companies who are researching this.
Erika (27:32) So I guess, I may be oversimplifying this, but I think in order for AI to pose an existential threat, it has to be in a place to do that. So we have to somehow enable it to take those kinds of actions. Like, to…
Yeah, be lethal in some way. Again, I'm probably oversimplifying this, but I kind of think of it in a similar way of, are we weaponizing AI? And if so, then we do need to put safeties in place, but you're never going to make it entirely safe if you use AI as a weapon, right? It's like, if you're making a gun,
You put safeties on the gun, you put the gun in a box. Those are things that you can do to make it more safe, but at the end of the day, it’s a weapon. It can kill people. So I think the only way that we can make AI not kill people is to not put it in that place, not give it those abilities. But I feel like I’m probably missing something of like…
you know, maybe it already is, and maybe there's a doorway for it to, like, I guess the edge case is, I don't know, right now all I'm thinking of is just LLMs, and words don't necessarily kill people unless, like, there are those edge cases, but I think,
Jonathan Tamsut (29:00) Mm-hmm
Erika (29:06) Yeah, that’s maybe a different rabbit hole.
Brittany Ellich (29:09) Yeah, this is tough. I don't think it would be possible to build any AI model without any bias, because humans are inherently biased and it's being trained on human-generated text, mostly. So I think you have to assume that these are all biased things. And I think, you know, with the
obvious exceptions of somehow integrating it into a weapon system, there's a lot of ways that it can still have a huge impact on the world. Like if people are like, well, we're going to start using AI as a judge. If the judge is already trained on, you know, years and years of human bias, we already know that there's a lot of bias in our judicial system, and that could have a pretty tangible impact on people. And I don't know. I think it's…
I think it's easy for software engineers to talk ourselves out of how these things can have real-world impacts, because we're just like, I'm just building this API. Look at what this API can do. And then how people are using it could be very different. I don't know how to fix that. Luckily, I don't do anything like that, I say.
But, you know, yeah, that’s a hard problem. It’s a hard thing to think about how it’s being actually applied.
Jonathan Tamsut (30:23) Yeah, I mean, this is a difficult concept. And I do just want to point out, I appreciate us coming here and just trying to grapple with this topic. I have no idea what I'm talking about, and it's kind of fun to be… So one example I hear is, okay, let's say you program an AI. So there's this idea of superintelligence. The idea is, okay, we create an AI that has human-level intelligence.
In theory, if it's just code and hardware, and I'm being very hand-wavy here, it should be able to just be infinitely more intelligent. Because, I don't know, you don't have to buy this argument, but the argument is, we can just horizontally scale intelligence. So we'll have a system that's way smarter than us. Already, LLMs have cognitive capabilities we don't have, just by the very nature of, they can,
quote unquote, remember all of the internet. So it's like, okay, let's say we created an AI that was way smarter than us and could therefore manipulate us in subtle ways that we don't even imagine. Yeah, maybe initially it's just software running on a computer, but it could just lie dormant, subtly manipulate us, get us to give it some physical embodiment, and then game over. So there are interesting…
edge cases. I mean, there are people like Eliezer Yudkowsky, he writes this blog, LessWrong, who are really focused on this. There's also, have you guys heard of the Zizian cult? Did you guys hear about them in the news? They're this San Francisco tech cult. They ended up murdering a few people. But I think one of their central beliefs was that
AI poses an existential risk to us. Definitely right now, that feels far-fetched. It feels like you're basically running a Python program that multiplies matrices together, and then it terminates. But yeah, I think the bias question is interesting.
I think the surveillance question is also interesting. I think these AI tools allow governments to surveil at levels that are just unprecedented. So yeah, that's it.
Erika (32:29) Yeah, I honestly think, in my mind it comes down to, I'm less worried about the AI itself and more worried about the humans in charge of the AI.
Jonathan Tamsut (32:39) Yeah.
Brittany Ellich (32:40) I do feel like, at some point in the near future, there needs to be some sort of moral or ethical code for software engineers. I mean, we have the same thing for doctors, where they have to follow this oath to say, I will not harm people. And I feel like we might eventually need that for software engineers too, because…
Erika (32:48) Mm.
Brittany Ellich (32:58) We’re doing stuff that has a huge impact on the world. I know that’s probably a really unpopular idea for any employer out there and I might never get hired anywhere else ever again after saying that publicly. But I think that, you know, there’s a responsibility that we do have to take for the systems that we’re building that have a real impact on human lives.
Jonathan Tamsut (33:17) Yeah.
Erika (33:17) 100% agree with that. And yeah, I will be the first one to advocate for more tech regulations too, as annoying as it is to build for GDPR. I feel like it's good pain to feel, because, yeah, this is important. Data sovereignty is important. And, like…
Yeah, we need to put protections in place because the technology itself is open, right? Like you can’t prevent people from using it in a certain way, but you can regulate how it’s built. Yeah, so I think that’s a very good point.
Jonathan Tamsut (33:54) And I think one thing that I really struggle with is, what are my values? I think there are obvious things, like I think people deserve a right to privacy, I don't think there should be discriminatory bias, we shouldn't be killing people. But I think there are other, hairier questions where I'm not even sure where I stand. And I think this is a topic that
I want to spend more time thinking about. It's also difficult, right, in our sort of capitalist system. You know, there's this idea of moral hazard, where corporations don't internalize the negative consequences of their actions and therefore aren't incentivized to prevent them. At least not directly. Maybe years down the road there'll be some congressional hearing where they'll get outed, but I don't even know if they care about that.
So yeah, I do think it’s good that people are talking about this more. I think it’s something that I want to form more of an opinion on.
Brittany Ellich (34:53) I think we should continue talking about this in future episodes. I'd love to talk about what a guideline would look like if you were to write one. It's probably different for every single person that writes some sort of oath or something like that, so it'd be hard to come to an agreement on one, but.
Jonathan Tamsut (34:56) Yeah.
Yeah.
Erika (35:07) I feel like I need to read more science fiction to get into this mindset that AI is going to take over the world.
Brittany Ellich (35:12) Yeah, same. Yeah, it feels very science
fiction. And now I really want to go read Superintelligence. So.
Erika (35:19) I, Robot, all of the Isaac Asimov back catalog.
Jonathan Tamsut (35:23) Yeah.
Okay. Now for the sort of canned segment part of the podcast. We're gonna go around and talk about a learning resource, book, article, course, et cetera, that we have consumed or interacted
with recently that we really liked. So, Brittany, do you wanna start us off? Popcorn, Brittany.
Brittany Ellich (35:45) Yes, I’m glad that I get to go first because I was wondering if Erika’s was going to be the same one since we’re both in the same book club. But my recommendation right now is the Thinking in Systems book, which I just recently finished and is easily one of the top five books I will.
continue recommending for everybody that is a software engineer going forward because it’s very approachable. You don’t even have to be a software engineer. You can really be in any field because it’s actually not super technical, which I did not expect going into it. But it’s a good way to sort of open your mind up to thinking about, you know, thinking about systems, be they software or otherwise in different ways. And
I really enjoyed it and I’m looking forward to, you know, recapping it further on the internet, most likely.
Erika, do you want to go next?
Erika (36:32) Well, definitely plus one to Thinking in Systems. I will share another resource that I have found fun recently: last week I was doing some learning about Copilot and how to use it, and
specifically how to use an MCP server. And so I did some tutorials on building your own MCP server, and I thought those were really fun, because at the start of last week I had, like, heard of this concept and
didn't really know what was inside of it. Yeah, Claude Desktop has a tutorial on building an MCP server. And then I also listened to a podcast, I think it's Latent Space, I don't listen to this podcast regularly, but they had the co-founders of the MCP, I was gonna say MCP protocol, but that's redundant, the co-creators of MCP,
on as guests, and it was fun hearing their inspiration and how they built it. So that's also a rec.
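For anyone curious what those MCP tutorials walk you through, here's a minimal sketch of an MCP server in TypeScript. It assumes the official @modelcontextprotocol/sdk package plus zod for input validation; the server name and the single echo tool are made up for illustration, not taken from any specific tutorial mentioned above.

```typescript
// Minimal MCP server sketch: exposes one "echo" tool over stdio.
// Assumes: npm install @modelcontextprotocol/sdk zod (and an ESM/TypeScript setup).
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Identify the server to whatever host connects to it (Claude Desktop, an IDE, etc.).
const server = new McpServer({ name: "demo-server", version: "1.0.0" });

// Register a tool the model can call; the zod schema describes the tool's input.
server.tool(
  "echo",
  { message: z.string() },
  async ({ message }) => ({
    content: [{ type: "text", text: `You said: ${message}` }],
  })
);

// Talk to the host application over standard input/output.
const transport = new StdioServerTransport();
await server.connect(transport);
```

A host application such as Claude Desktop is then pointed at the compiled script in its MCP server configuration, and the echo tool becomes available for the model to call.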
Jonathan Tamsut (37:51) Yeah, no, that’s what you’re saying.
Cool, yeah, my recommendation is kind of similar to Erika's. So there's this podcast called, I think it's called the Dwarkesh Patel podcast. That might not be the name, but Dwarkesh Patel is the host, and he wrote a book called, I think, The Scaling Era. And it's kind of just him synthesizing
things he's talked about on the podcast. I haven't finished the book, but it covers a lot of the stuff we talked about, like what an LLM is, how they're evaluated, different benchmarks, AI safety and alignment. So yeah, it's pretty interesting. If you're interested in that topic, I recommend it.
Okay, so thank you all for tuning in to Overcommitted. If you like what you hear, please do follow us, subscribe, or do whatever you like to do on the podcast app of your choice. Check us out on Bluesky, our social media app of choice. Most of all, share with your friends. Until next week.