Jonathan Tamsut (00:00) Hey everyone, welcome to the Overcommitted podcast, the number one podcast in the world according to our moms. We discuss code commits, our personal commits and some stuff in between. I’m your host, Jonathan Tamsut, joined by…
Brittany Ellich (00:14) I’m Brittany Ellich.
Bethany (00:15) Hey, I’m Bethany.
Erika (00:16) and I’m Erika.
Jonathan Tamsut (00:17) We are software engineers who initially met as a new hire group at GitHub and found a common interest in continuous learning and building interesting projects. We continue to meet and share our learning experiences and discuss our lives as developers, whether you’re pushing code or taking on new challenges, we’re happy you’re listening. Today’s episode focuses on AI 2027, a scenario prediction published by a nonprofit called the AI Futures Project. Our goal today is just to discuss
the document and talk about the progress of AI and our fears and sort of predictions for it. So as I mentioned, we’re gonna be talking about an article that was published, I think earlier this year, called AI 2027. Before we begin, I just wanna kind of summarize it.
As I mentioned before, it was published by a nonprofit research group called the AI Futures Project. The people who published it were former employees at OpenAI, and there’s also someone who has experience in forecasting. And they make some pretty bold predictions. So we’ll throw a link to the article in the show notes.
Basically, the gist of it is that they believe that we’re going to, or I guess they predict with some probability, that there’s just going to be a takeoff in AI progress. And I think what’s cool about this article is they actually walk you through it sort of step by step. So kind of the key ingredients that they think are important here are
you know, we’re gonna scale AI compute. And so we’ll talk about sort of the role of compute in these models. And they believe that the way we’re gonna get to this sort of artificial superintelligence, ASI, is through training AI models that act as AI researchers. So there’s this sort of recursive self-improvement that occurs.
There’s some war gaming in this article. So they talk a lot about an AI arms race with China. So, you know, in this report the United States and China are going to be competing to develop artificial superintelligence. That’s sort of not too surprising considering, I think, AI could definitely be used as an instrument of war.
One other, I think, interesting thing about this is they break down AI progress into sort of two parts. One is compute, so just getting bigger computers, training bigger models. And then they also talk about sort of algorithmic improvements. So how can sort of the underlying algorithms improve the performance of these models? We’ll talk a little bit about, I’m sure we’ll mention the history of sort of algorithmic improvements and stuff like that.
Another huge sort of thread in this article is alignment. We’ll talk about that. But how do you get an AI to share our values? And there are two endings, and maybe we’ll spoil them later, but the two endings are the slowdown and the race. Basically they describe two different scenarios: one in which we kind of slow down our AI progress and focus on safety, and a second in which we kind of carry on, try to out-compete China, and develop a superintelligence that may or may not share our values.
So with that, hopefully that gives you a little taste of what we’re going to talk about. Very lighthearted, fun topics today. So I guess I’ll start off. Was there anything that stood out to you guys in this article that you found particularly interesting, or anything that blew your mind when you were reading, that you hadn’t really thought about before?
Bethany (03:38) I think personally it was interesting, them coming at it from the angle of these agents or models training themselves, and really coming at it from that perspective of a researcher and making the ideal researcher. And then, through those parameters, these models that are training could have
differing goals from future models or from the humans that are behind it. So I did think that was an interesting prediction or strategy that AI research might take. One thing I did notice was they mentioned AI models generating content to train other models on, which I feel like is just a big
no-no in AI training. So I was interested that they were kind of a little blasé about it, given the expertise in the group that wrote this, but maybe I missed something around that.
Erika (04:28) Yeah, I think I’m always very skeptical of any future predictions. But I did find this a valuable read for understanding some of the potential risks. And this is kind of like a worst case scenario.
You know, they say the predictions that we make have been supported by research, and, you know, they’re not coming out of nowhere, they’re all possible, but it’s a very aggressive timeline. And it’s also dependent on a lot of things happening. I think the most important takeaway of this for me is, well, two things. One,
how important regulation is in the AI space. A lot of the worst case outcomes here are specifically around the AI intentionally misleading us or having nefarious goals,
which apparently is already shown to be happening in some models. That was described as an outcome of prioritizing speed of development over safety. So kind of what you were saying, Bethany, adopting some of these not so great practices, and maybe they wouldn’t do that, but there’s nothing stopping somebody else from doing that.
Yeah, I mean, you can train your model however you want these days. You can pull data from really anywhere and who’s going to stop you? And what’s to say you have to prove, you know, how you got that data or what the training or the intent is. So, yeah, I mean, honestly, I kind of think that the AI industry needs to be regulated to a similar degree as the financial industry, where you have to be able to
prove every transaction, prove every, like, you know, every inference, maybe not like to that exact degree, but like, you know, I want to see the intent, I want to see the inputs, I want to see the like thinking, because yeah, I think this is, like this is something that
could happen if we don’t take steps now to ensure that we’re developing this safely. I think the exciting part is like, wow, this stuff is actually possible. This is super cool on a theoretical level if you have this doing the right things, like these sort of like…
They call it like geniuses in a data center. Like how cool is that to like have that brain power if it’s doing the right thing. But yeah, it’s also like super important to regulate it and like make sure that it’s progressing in the right direction and that we’re like valuing the right things as we go along. And then one more thing. Sort of like on the exciting part, like it also got me thinking of like
you know, this idea of like, is AI gonna take my job? Which I think we’re gonna talk about a little bit more later, but, you know, they kind of describe like, yes, there will be job loss in the software engineering world, like writing the actual code, but we will become more like managers of agents. And I was thinking, well, that’s a little bit sad because I do like writing code.
But mostly I want to be involved in the process and I want to be building things and building cool things. And I don’t think that part’s going to go away. So yeah, I’m trying to adopt that sort of mindset shift of like, OK, it’s OK if what I’m physically doing in my job is different as long as the outcome is still the same.
So kind of coming to terms with that.
Brittany Ellich (08:08) I had a lot of thoughts.
reading this as well. I feel like this could be a four-hour-long podcast episode of us talking about this, there’s so much. But just briefly, the thing that I think is the most interesting from reading this is the idea that we’re going to be training models that will train the next generation of models. I’m not an AI researcher, so I don’t know if this is already how things work, where you train a model and that model is used to train the next level of models. But it seems like it’s somewhat
of an inevitability that eventually we will get to a point where we are building models that are smarter than humans, and indeed smarter than even the smartest of smart humans, which is really interesting. But I also think that there’s a lot of hubris in associating all of these human traits with these models that we don’t have the capability to understand. Like there’s this big assumption that they’re going to be just as selfish and self-interested as humans, and so they’re going to want to try to, you know, break down the entire world to build paper clips or whatever it is that they’ve been given as a goal. And I’m curious, I think that’s a big assumption we’re ascribing to them, because we can’t necessarily, we don’t know what they’re going to do, obviously. This is a future prediction, but we’re saying that they’re going to be very human-like, and I don’t know that we can necessarily say that for sure.
Bethany (09:25) I think…
Jonathan Tamsut (09:25) Yeah.
Bethany (09:26) I think those are really, really interesting points. And it got me thinking about the idea of goals. Initially, when I heard about this paper, I was like, I cannot envision a computer system having goals or having the desire to go after something. But I do think, to kind of call back to our podcast episode about thinking in systems,
it is a system, and a system will have goals, whether that’s something that is explicitly wanted by the individuals within the system or is given to the system. I think that’s kind of where the nuance comes from. But it is a really interesting way of looking at the prediction in this paper. Really interesting thoughts on the paper as a whole.
Jonathan Tamsut (10:11) And I’m, to kind of respond to what all of you said, I mean, I think, so, you know, I think one interesting point that was made is like, training these models is like training a dog, right? I think they drew that analogy because like, you’re kind of looking at the output behavior, but you don’t truly know what it’s thinking. And…
they introduced this term called like mechanistic interpretability, right? And so this is something we don’t currently have and like no one knows how to do this. So this is the idea of like, you know, can you go into these models and understand them truly like from a step-by-step point? And like there’s just, that’s like an unsolved research problem, which is pretty scary because, you know, they talk about these models becoming smart, potentially smarter than us, and they can lie and they can manipulate. And that’s pretty terrifying.
I think, yeah, I think the fact that these models have goals is, I mean, sort of a deep mystery in the field, right? You know, these models are based off this neural network architecture called transformers that do next token prediction. And that’s sort of the loss that they’re trained on. And from that, as an emergent property, you get these
goals and this sort of deeper understanding. So I think that’s really interesting. I’m, frankly, just pretty pessimistic about this. You know, in this article, I think a big unknown is that there have to be pretty significant technological advances in order for these agents to materialize. I mean, at least, you know,
A lot of the research at these research labs at OpenAI and Anthropic is not public for safety and competition reasons. But it does feel like we’re far away from this. Obviously, this article claims that this is all going to be around in 2027. So I guess, who knows? But there does seem to be a little bit of technological handwaving. The other thing I do want to mention, and we’ll
post it in the show notes, is, you know, one thing that this is sort of predicated on is a paper published in 2020 by some OpenAI researchers about scaling laws. And basically there’s this observation that the training loss, so basically the model performance, just improves the more parameters these models have. So basically these models get better the bigger they are. And people think that this is gonna go on indefinitely, you know, there’s
no reason to believe that this will stop. And so there’s this kind of idea in the field, it’s talked about in this paper and talked about elsewhere, that we actually have the knowledge to build artificial superintelligence now, we just need to get bigger models, which is interesting and we’ll see. And I think that’s sort of been disproven a little bit with chain of thought, but that’s sort of an interesting idea here.
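For reference, the two technical ideas in that stretch of conversation can be written down compactly. This is a rough sketch assuming the standard formulations, not anything quoted in the episode or the report: the next-token objective is the cross-entropy loss over a token sequence, and the 2020 scaling-laws paper observes that test loss falls off roughly as a power law in parameter count, with the constants fit empirically (similar power laws are reported for dataset size and compute).

\mathcal{L}(\theta) = -\sum_{t} \log p_\theta\!\left(x_{t+1} \mid x_{1}, \ldots, x_{t}\right)
\qquad
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}

Here the x_t are tokens, \theta the model parameters, N the (non-embedding) parameter count, and N_c and \alpha_N fitted constants. The “bigger is better” claim in the conversation is just the observation that this curve keeps dropping as N grows.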
Erika (12:48) Yeah, I guess the other thought that I have related to what you said is like these technologies only have as much power as we allow them. Like it’s similar to like, you know, allowing human power or human intervention. So like giving someone access, elevating them to a place of power or leadership.
You know, that needs to be something that we agree on and that we allow. But once you do, you have to deal with whatever the consequences are. So we’re sort of in this transition period of crazy growth within the AI industry. And we’re sort of giving it access to everything. Like I think of MCP, which is super cool. But like,
Also, it’s very permissive and not really locked down at all. So some of this stuff, yeah, giving it access to, you know, a system or a network or something, yeah, if you kind of carte blanche give it access to everything, okay, then you have to deal with the consequences. But again, if you take maybe a more conservative approach or, you know,
give it access to a more limited scope or take a sort of more skeptical approach of like, okay, well, you know, yeah, I know that it has this potential, but, you know, I wanna think through the consequences and potential implications of using it in this maybe potentially high risk scenario. Like, yeah, I think that
we need to also think through that. But in my mind, it’s not really any different than electing somebody to a political position or elevating someone to a position of power within a company. You’re giving them power and that’s a choice. And yeah, you can also take it away, and I’m not saying you’re gonna be in the same place you were before you did that, but
Like there are actions, corrective actions you can take on the back end if you end up making a mistake too.
Jonathan Tamsut (14:47) Hmm.
Bethany (14:48) And I think a lot of that assumes people purposely putting AI into various experiences, so, like, knowingly setting up MCP servers or whatnot. I think there is something to be said there too, though, about, I mean, even prior to AI, there are computer worms that will infect your network, infect your devices and stuff. And they’re not intelligent at all, but they can spread like nothing else. And so if you add some intelligence to that, I wonder if that
adds complexity to the situation as well and makes it harder to reverse.
Erika (15:17) Yeah, that’s very true. like, I feel like with any technological development, you have to think of what will happen if this gets into the hands of bad actors.
Jonathan Tamsut (15:26) Yeah. And to your earlier point, you know, one question I think I do have, and, you know, I think no one knows the answer to this, is: is this reversible? I mean, if there is sort of this sentient superhuman system that is far smarter than any human and can coordinate with sort of infinite clones of itself, like, you know, who’s in charge at that point? If this has goals that are distinct from human goals, like,
you know, if this is out in the wild, maybe, you know, it has some type of physical embodiment, it’s in a robot. I mean, it’s like, well, at that point, I don’t know if humans will be able to be like, okay, AI, go back into the data center. You know, it might be like, no, I don’t want to. So I think that is, I mean, I think that is scary. Hopefully far fetched, but I mean, this paper is terrifying. This paper is absolutely terrifying. I don’t know if you guys felt terrified reading this.
But it’s pretty scary.
Brittany Ellich (16:19) Okay, I do think that there’s an aspect of it that is very scary, for sure. Especially if you take it at face value and you’re like, these things are potentially gonna happen. But I mean, you could write that about anything. It could be, Yellowstone could blow up tomorrow, and you could write an equally fear-mongering paper about that. Not that I’m saying that this is just fear-mongering, but there are a couple things in the paper that give me a little bit of pause. Like for example, the discussion about the…
political implications on the slowdown side where they say, whoever is vice president in 2027 is automatically going to be elected president in 2028 because of their impact on AI. I’m like, OK, we already know who this person is. This was released two months ago. We know who this elephant-in-the-room vice president is. We know this person already has a lot of ties to large tech companies.
And it made me, I tried to Google, like, okay, who is funding the AI Futures Project? Like, is there something else going on here where
somebody might be trying to put this out there into the world as some sort of political maneuvering stunt as well?
Erika (17:23) Did you find anything?
Brittany Ellich (17:25) No, I couldn’t find anything worthwhile. I think it looks like it’s mostly funded by Google, but yeah, TBD. But I feel like that in particular gave me a little bit of pause. I’m like, okay, who is trying to scare everybody into thinking that the government is gonna need to step into AI? Yeah.
Erika (17:43) Hmm.
Jonathan Tamsut (17:44) I should note, JD Vance did publicly claim he read this paper.
Brittany Ellich (17:47) Wow.
Jonathan Tamsut (17:48) Yeah, I mean, you know, the political aspect of it is pretty fascinating, right? Because, I mean, a technology this powerful will become nationalized. I mean, if it sort of bears out in any way that is similar to what’s outlined in this paper, it’ll have just profound implications for, you know,
our society. I mean, I think even when you think about software, like, I don’t even know. If this pans out, I don’t even know if software will exist in its current form. Maybe we won’t have user interfaces. It’ll just be, we’ll have an agent that can book us flights and do everything we do sort of, you know, kind of manually. Yeah, so it’s pretty…
Brittany Ellich (18:37) Yeah, the other thing that I thought was interesting is how the average person’s life was described through this and how most people are still gonna be using AI for entertainment. There’s gonna be some sort of UBI situation, because, I mean,
You know, there would have to be something, I guess, if there’s nobody that’s working. Maybe not necessarily. I think there was also a mention that like the Dow would be at one million, which just to me screams of like hyperinflation and like major economic problems.
But yeah, I think one thing that was mentioned was, yeah, you’ll be able to create like a AAA game with agents in a month or something like that, that’s highly customized to you and all this stuff. And it sounds like the idea is that people, I guess, are just going to be entertained by AI and by all of the technology and we’ll just be chilling, not working, which is also kind of an odd perspective.
Bethany (19:33) Sounds like someone watched WALL-E.
Erika (19:35) Yeah, I know and like I feel like a little bit of that aspect is misunderstanding how humans work fundamentally. I really have a hard time believing that like we as humans will operate fundamentally differently. I do believe that there’s like more potential unlocked by AI. Like I see this as like, you know,
sort of the logical next step of electronic technology from the internet. Like, take news consumption as a single viewpoint, right? Fifty years ago versus today, there’s so much more news that you can consume on a second-by-second basis. And like AI…
It’s kind of a silly application of that, but yes, AI may unlock even more potential for more information, more content. But at the end of the day, how much news can one person consume? We are still a limiting factor of, I can only read so much, I can only take in so much information. And I also just really have a hard time believing that
we can all sit around and be idle and be happy and functioning. Like I truly believe that like we need to be like working. I don’t know if that means like we all turn into gardeners or what, like I don’t know. Like maybe we all find something more like close to the earth that we do as opposed to like working on technology. But yeah, I really don’t, I don’t believe that that’s going to happen.
Yeah, and I kind of feel like the claims that that will happen are sort of misunderstanding how humans work fundamentally.
Brittany Ellich (21:26) I think it’s basically every software engineer’s dream that they want to go live in the woods though and start a farm. So maybe this is finally going to become a reality for some people. I think the news aspect is a very good point though that you bring up. We already know that the media can manipulate people. And when we have AI controlling the media, it can control a narrative and control the way that people act. So you think you can just unplug it, but…
If AI is also controlling the perspective people have of AI, then we wouldn’t.
Bethany (21:58) Yeah, and I think that’s, I mean, all of this has been happening for years just due to social media and various algorithms pushing different or, like, skewed viewpoints and such. So I don’t think this is necessarily anything new. It’s just more automated. So I guess it makes it easier to do.
Jonathan Tamsut (22:17) And, well, Bethany, to your point, I read an article Sam Altman wrote that came out on his blog, and he said, we already have a great example of a misaligned AI: that’s your social media algorithm, right? It is, you know, meant to engage viewers and get people upset.
Erika (22:17) Yeah, and like, sorry.
Yeah. Well, and like with the news aspect too, like I’m already very distrustful of human-based news reporters. And like, I take a very critical eye to any claims or any arguments or any opinions. And I basically don’t put any credence in, like, AI-generated news, as
Bethany (22:37) so good.
Erika (22:58) as much as I can, you know, spot it or whatever. So like, yeah, I mean, I don’t know how many people are the same and how many people are that critical. And not to say that I can’t be duped or fooled. I’m sure I’m absolutely fallible, but I’m also like very distrustful of the technology. And I don’t think that unless, like you said, there’s some…
major shift in transparency that that will ever change for me either. It can do amazing things, but until I can ask it and believe what it says when it’s telling me where it got this information or why it thinks the way that it does, then I’m gonna be inherently skeptical of the output.
Brittany Ellich (23:41) One comment there, and then I swear we can move on. But one thing that I think is interesting is there’s also been this shift recently, in the last couple years, towards video. Like everybody wants to see video. Even when we were starting this podcast, we were doing some research and everybody was like, everybody wants a video podcast. So I wonder if part of that is because of that authenticity you get from seeing actual people talking.
And that’s part of, like, you know, people don’t want to just read something or listen to something because that’s easier to fake than video. Not that you can’t fake video. But I’m curious if, you know, looking for that authenticity has some sort of impact there too.
Jonathan Tamsut (24:14) I also think one kind of disheartening thing, at least in the short term, is that business leaders are already sort of eager to replace humans with AI. Someone like Mark Zuckerberg claimed that 33 percent or something of code at Facebook is gonna be written by AI. If this does sort of pan out, I think there’s certainly gonna be a period of, like…
you know, this awkward period, hopefully a short awkward period, of upheaval where, you know, capitalists are using this superhuman labor. Human labor is obsolete, at least partially if not totally. And we need to figure out, well, you know, how do we align our species, our society, to sort of, you know, give everyone the ability to live a fulfilled life and
meet their basic material needs. It is going to be interesting to see how this pans out. I’m not super encouraged by the speed at which government adapts to changes, and I hope there’s not going to be a lot of pain and unrest.
Erika (25:17) Yeah, well, that is also a point that’s not captured in this article, or report, or summary, whatever it is: the factor of corporate greed and corporate interests. It’s only really mentioned as a slowdown aspect, like, well, corporations might not be able to adapt to the speed of AI research. It’s like,
Okay, no, but there’s also a side of it where currently we’re seeing, like you said, a lot of tech leaders, a lot of corporations, stoke the fire and hype this up for the sake of sales or shareholder value, or do layoffs because they claim that this is proof of the speed and the progress of AI.
And yeah, they didn’t talk about that at all in this, but at least currently, it’s a major factor in the speed and the growth and the progress.
Jonathan Tamsut (26:10) Yeah. And you know, we’ve already seen it. I mean, right, there are Waymos driving in Los Angeles that are taking jobs away from Uber drivers. And so, you know, I don’t know at what point the government is going to have to step in. You know, maybe there’ll have to be 10 percent unemployment or whatever. I mean, this is, you know, potentially going to be a massive upheaval that, you know,
I hope policymakers are watching and I hope we can sort of navigate.
Bethany (26:34) Yeah, I kind of think of the FDA in terms of regulation. At some point in history people were just making drugs and giving them out, until there was a regulatory board to say, okay, you have to go through this specific process to make medicine and it has to pass these tests and has to be validated by this board and things like that.
I almost wonder if tech is going to go through something similar where more and more technology is vetted through regulatory agencies and things like that. So I think it is an interesting point we’re at where there’s really not much oversight from the government. And that’s helped make a ton of progress, but also at what cost? I think it’s a really interesting period and it’ll be interesting to see how this kind of causes ripples throughout
history and technology to come.
Erika (27:25) Yeah.
Brittany Ellich (27:26) I’m
pretty skeptical that the government is gonna be capable of keeping up with things just based on what we’ve seen so far. We already have so much evidence about how harmful things like social media are to the mental health of children, just as an example. There’s so much evidence that supports that it is really, really bad for kids’ mental health. And I mean, you still have people doing basically nothing about it.
And so I think that this is a very interesting topic. I’ve heard the idea floated around a little bit about a code of ethics for software technologists. And I think that would be more successful than relying on
the government to step in. Like I think that there is a really fantastic opportunity here for people who build software to come together and regulate ourselves in some sort of meaningful way, kind of like, you know, doctors have the Hippocratic oath that they have to, you know, meet. I feel like that is, that’s not necessarily more likely, but probably gonna be more successful than relying on the government for regulation, because it just moves too slowly.
Erika (28:25) Yeah, I think that’s fair. And you also have to think of the motivations. Like, okay, why has there not been any social media policy enforced? We’ve seen Mark Zuckerberg on the steps of the Capitol. We’ve seen lobbyists, they’re putting money into this to prevent it. Until that stops or until somebody steps in with more money to…
fund the opposite side, it’s not gonna happen. So yeah, I think you’re right that we’ve seen great success in open source communities. It always blows my mind that there are some of these amazing groups within technology that sort of self-form to, yeah, to…
to enforce certain standards on their own. And I think you’re right. I think that’s probably a better place to put our focus, because otherwise we’re probably gonna be kind of shouting into the void and, yeah, hoping for something that’s not gonna happen. Yeah. Cause, I mean, for, yeah, for
both reasons, like you said, like on the one hand, a lot of these like legislative bodies don’t even know how to regulate these technologies. And that’s not to say that it’s an easy question to answer. Like, you know, it’s also hard for me to even say like, what would regulation look like outside of sort of some of those transparency aspects.
But yeah, I think the researchers themselves or the technology community are gonna have a better idea of that. And also, yeah, it sounds a little altruistic, but they probably have purer motivations outside of whatever’s motivating Washington these days, which is probably money and lobbying and…
power and self-interest. I wouldn’t necessarily, and this is the very skeptical side of me, I wouldn’t necessarily think that anybody would be motivated to do that, but we’ve seen it in aspects of technology, especially web technology, so I do believe that it’s possible.
Jonathan Tamsut (30:36) I don’t know if you guys ever do this, but sometimes I like going on YouTube and watching mashups of these tech CEOs at congressional hearings and just seeing Congresspeople ask them totally nonsensical questions. It’s really funny. And then you kind of just see their response. It’s also, maybe, I think it is heartening and I think it is a good thing that there are multiple companies doing this. I mean, if this was just Google… So hopefully these companies can…
kind of counterbalance one another in some way. But I guess sort of the converse to that is now they’re sort of incentivized to compete with one another and race towards market dominance, which may deprioritize safety.
Okay, well I think it’s time for our fun segment. And I thought for our fun segment, we can all go around and we can talk about our P-Doom. So P-Doom is a term in AI safety that refers to the probability of existentially catastrophic outcomes as a result of artificial intelligence. So basically, what is the probability of AI
basically killing us all. So it’s a number between zero and one. There’s actually a website, we can link it in the show notes, with sort of famous AI researchers listing their P-Doom. So you can provide a number or you can provide a range and maybe an explanation. I can start. Yeah, this is totally not super well informed, but I’ll say my P-Doom is between
0.15 and 0.25. I think maybe, you know, it’s possible we’re in a hype cycle and there’s gonna be difficulties employing this technology. I think that’s probably, you know, where I stand until I’m proven wrong, but definitely if these, you know, these sort of technological advances come to fruition, that’s pretty scary.
Erika (32:18) Sorry, I thought you said from zero to one.
Jonathan Tamsut (32:20) Yes, so from 0 to 1, so I said 0.15 to 0.25.
Erika (32:21) Sorry.
Got it, okay. My brain was filling in the numbers wrong. From zero to one.
Yeah, I haven’t thought about this very much, but…
I think I’d probably land around like 0.2. Yeah, like I think it’s possible, but I think that there’s a lot of mitigating factors and I don’t necessarily think it’s likely.
Bethany (32:44) I’m gonna say for me it’s probably like more like .01. Like I… We’ve survived this long. I’m not confident that AI is gonna be our downfall. Call me a hater, I don’t know. I… Like never say never, which is why I would say .01, but I highly doubt this is gonna be our downfall. But yeah. Hopefully the world does not prove me wrong there.
Brittany Ellich (33:06) Yeah, I would say I’m probably at about a .05 to .1, maybe a little bit less confident than Bethany. But, you know, I think…
It’s gonna evolve and there’s the possibility that it’ll go crazy, but that’s also, you know, assuming that we aren’t able to make changes along the way if we see things heading that direction. And I feel pretty strongly that humans want to continue living, just as a collective species, typically. So I think that if something big does come up, then we’ll prevent it. I think it’s also important to say that one of the guys that wrote this article, his P-Doom is 0.7,
which is also a good perspective under which to read the entire thing.
Bethany (33:47) I think we can guess which outcome he ranked higher in the writing.
Erika (33:51) Yeah, I’m also curious what podcasts you listened to…
Jonathan Tamsut (33:53) Yes, well I hope he’s happy.
Erika (33:55) Brittany was saying that she listened to some podcast interviews and stuff like that with the authors. And I’d be curious, maybe you can link those out too, to hear some of those and find out more of their perspective.
Brittany Ellich (34:07) Yeah, I’ll
share the Next Big Idea podcast episode. And I’m also going to share in the show notes the Apple paper that just recently came out. We can probably do another talk about that, because I’ve heard it’s sort of a counterpoint to this that’s a little bit more, not balanced necessarily, but a different viewpoint. I think it’s called The Illusion of Thinking, so I’ll share that as well.
Jonathan Tamsut (34:27) I read that paper yesterday and I have a lot of thoughts on it.
Yeah, I think that would be an interesting idea. I mean, there’s kind of the question of like, do these models actually reason, have like generalizable reasoning skills? And there’s kind of controversy there.
But yeah, pretty fascinating stuff. Thank you all for the great discussion. It’s always nice to hear your perspectives and I appreciate that. So without further ado, thank you, listener, for tuning in to Overcommitted. If you like what you hear, please do follow, subscribe, or do whatever it is you like to do on the podcast app of your choice. Check us out on Bluesky
and share with your friends. Until next week, bye bye.