Byron Reese is an entrepreneur, public speaker, and author, currently the CEO and publisher of Gigaom, a leading technology research company. As a technologist and futurist, Mr. Reese brings his passion for history and philosophy to his readers, shining a light on social dilemmas that we already face, and will face, as humans. His most recent book, "The Fourth Age", explores the essence of humanity through the lens of philosophy applied to technological change, casting an objective (yet positive) light on how the coming age of machine automation and AI will reshape our future, potentially ending war, ignorance, disease, hunger, and poverty, while also exploring less favorable scenarios that humanity should do its best to avoid. You can follow Byron on Gigaom, on his website byronreese.com, and by listening to his podcast "Voices in AI".
"In the end our challenge is to be great ancestors."
— Byron Reese
Transcripts are automated. When I reach enough supporters, I will add human-curated transcripts for every episode, making The Hoomanist's interviews accessible to everyone. You can support my work now.
Byron Reese (guest): I used to have this habit of writing things on my hand, things that I needed to remember to do, so I still had this reflexive habit of looking at my hand all the time. If I had nothing on my hand, it was like I didn't have anything to do, and I decided I wanted to never think that. So I did. I put on it what the ancient Romans called a "memento mori," a reminder of death. I actually went into a tattoo parlor and got this spot. It looks like somebody just took some ink and jabbed it into your hand, once: just this little spot of black ink. And every time I look at it, I think, "Ah, I have something I should be doing right now, because I'm mortal and I'm going to die." And I think in the end our challenge is to be great ancestors, and that's what I try to be. I included this Dr. Seuss quote once, and it said, "Don't cry because it's over; smile because it happened." I want those who come after me to smile because it happened.
Simone Salis (host): I'm Simone Salis, and this is The Hoomanist with today's guest Byron Reese.
S. Salis: Byron is an entrepreneur, public speaker, and author, currently CEO and publisher of Gigaom, a leading technology research company. As a technologist and futurist, Mr. Reese brings his passion for history and philosophy to his readers, shining a light on social dilemmas that we already face, and will face, as humans. His most recent book, The Fourth Age, explores the essence of humanity through the lens of philosophy applied to technological change, casting an objective (yet positive) light on how the coming age of machine automation and AI will reshape our future, potentially ending war, ignorance, disease, hunger, and poverty, while also exploring less favorable scenarios that humanity should do its best to avoid. You can follow Byron on Gigaom, on his website, byronreese.com, and by listening to his podcast, Voices in AI. Byron, I read The Fourth Age, your most recent book, in a little less than 48 hours.
B. Reese: You're probably overdosing on me right now. Did you really read the whole book?
S. Salis: Yeah, I read the whole book in a little less than 48 hours. It is so incredibly dense with information and data.

B. Reese: Well, thank you. And, uh, you know, I wasn't intending for you to read it in a 48-hour spiral.

S. Salis: It's very interesting, because your transition from the history of humanity, what happened in the past technologically speaking, to the future, what will happen when we eventually achieve artificial general intelligence, is so seamless. It almost sounds like a history of what didn't happen yet. Beautiful. Your thesis is that we're entering an age of drastic changes, starting from technological ones, and maybe in big part due to the achievement, by humanity, of creating an artificial general intelligence, which is different from a narrow artificial intelligence. You ask big questions based on the assumption that we might achieve that. So, what is the difference between an artificial general intelligence and a narrow one, and why does it matter so much for the eventual fourth age we might be entering, as you theorize?
B. Reese: You know, it's unfortunate that we use this term "artificial intelligence" so broadly, because on the one hand it's used to describe Commander Data from Star Trek, or C-3PO from Star Wars, or something that can interact with you like a person. But artificial intelligence is also anything that responds to its environment, so it's a lawn sprinkler system that automatically comes on when your grass is dry. Unfortunately, those are both artificial intelligence, and we don't have a lot of nuance between them. The reason that's a little bit of a problem is they may not have anything to do with each other. It may not be that the general intelligence, the robot like you see in the movies, is just a better form of that lawn-sprinkler thing. It may be completely unrelated. In fact, all we know how to make is narrow artificial intelligence. These computers can do one thing: they can play chess, they can spot spam. They can do one thing well, but that doesn't mean that when they can do two things, then four, then 20, then a thousand, then a million, they somehow one day become like us. We are more than just being able to do a thousand different things. We are something beyond that, and that's a general intelligence. And, like I said, we may not have even started working on that technology yet. Certainly, I don't believe anybody knows how to build it.
S. Salis: Yeah. You mentioned the, um, unifying of different functions from narrow intelligences, creating like a Frankenstein: as if, when you add enough functions together, it might be an emulation of abilities that are part of human abilities and skills, and beyond that. But then you touch on topics like consciousness, how consciousness arises, who is human, how do you define humans, and, in general, the social and economic debates that are going to happen in the next decade or so, like the AI headlines I've seen recently: are they all going to take our jobs? What are we going to do if we don't have anything to do? Are they going to be our overlords? Your work mostly discerns between scientific theories and studies and what is dystopian and utopian. But out of all the dystopian or utopian possible developments, the implications of AGI, you seem to me to be quite optimistic about the outcome. You try to be as neutral as you can, but I think that, among all the dangers, you see a great potential in the future development of AI.
B. Reese: That is correct. As you point out, the book starts with old technologies, and I really got interested in what technology is and why it affects humanity so dramatically. And I came to a really simple understanding of it, which is: technology is just things that multiply human ability. That's why you can move more bricks with a wheelbarrow than you can carry, and then, with better technology, like a forklift, you can move even more bricks. And so technology empowers us to do more. There's a reason that, you know, I don't work harder than my great-great-grandparents, but I live a more lavish life than they did. Why? Because an hour of my labor just yields so much more than an hour of theirs, because of technology. And so what I tried to do was say: some technologies come along and they just make our lives easier or better, or what have you, but sometimes a technology comes along that's so big, like language, that you can't even imagine the species without it.
B. Reese: You can't imagine humankind without cities, without the division of labor. So what I can tell is that for the last hundred thousand years, when we come up with new technology, for the most part we use it to increase human productivity, and that increases the human standard of living. And there's no reason in the world to think that's going to change. AI, in a sense, is given an artificial kind of aura of mysticalness or power, but it's a really simple idea. And the simple idea is: take a bunch of data about the past, study it, look for patterns, and make projections into the future. I'll just say one more thing real quickly. Twenty-five years ago, the Mosaic browser came out, and if you had asked, "25 years from now, 2 billion, 3 billion people are going to use this technology; what is likely to happen?", you could have said something like, well, people probably aren't going to mail as many letters, and travel agents are going to have a hard time, and newspapers are going to have a hard time. And you would have been right about all of that, but you would never have realized that there would be an open-source movement, that people would write code and share it for free.
B. Reese: You would never have said, oh, there'll be an encyclopedia that people will work hard on for free. You never would have said, oh, there are going to be these forums where anybody can post any problem and other people are going to help them. You wouldn't have seen Etsy or eBay or Google or Amazon, or any of the $25 trillion worth of wealth that's been created. And all the Internet was, was letting computers talk better. That's it. And so, when you made computers talk better, you got all of that. Imagine that artificial intelligence is a technology that makes people effectively smarter: if you woke up tomorrow and everybody was smarter, that's going to be a good thing. If that isn't a good thing, then by extension it would be better if we all woke up with 10 fewer IQ points tomorrow, and I just don't believe that.
S. Salis: I think technology is great for emancipation. It definitely enables creation and democratizes tools. I agree with all of that. I wonder about the other side, though. If you wake up in the morning and you are smarter because a device makes you smarter, but with a degradation of memory and language abilities, because you're depending on the devices that make you smarter, do you wake up smarter, or do you wake up with a little bit less IQ but a bigger tool?
B. Reese: That's such a fantastically interesting question, and it's an interesting old question. You know, Plato was no fan of writing. Ironically, we know that because he wrote, but he predicted that when writing became widespread, our memories would get worse. In fact, he even said that with writing, you haven't made a tool for remembering anything; you've only made a tool for reminding you of something. And it is true that back in those days, when you had those oral histories, the Odyssey and all of these things that people memorized, if you wanted to know something, you had to remember it. So you're entirely right. By the way, a man who used to carry bricks on his back and who gets a forklift may not be as strong, but it doesn't matter. If I have a tool that I can just hover over somebody to tell what illness they have and what to do about it, why is that less perfect than if I knew that myself?
B. Reese: So you're right, you're entirely right. I do math well in my head because I had to, and my kids don't, because they don't have to. Back when I was a kid, you would say, "Why do we need to learn this? Can't we just use a calculator?" And the teacher would always say, "Well, you're not going to have a calculator with you all the time, are you?" And of course you are, because we do. And so it's true: our memories are worse because of writing, and we do math in our heads worse because we have calculators. And you're right, we're only going to effectively be smarter, but we won't actually be smarter.
S. Salis: Okay. So there is a little bit of a trade-in: we trade the intrinsic capacity of the human brain for executing some tasks, and we gain, like, a general ability, and...
B. Reese: It's always hard to see, because if you went back in time to those people (there was a Roman general who knew the names of all 20,000 of his troops, and their family members' names) and you said, "Hey, in the future you will have any fact you want at the tip of your finger, any knowledge in the world; however, you will have trouble remembering your four-digit PIN," he would have said, "Thank you, that does not sound like a human." But we're on the other side of that, and we don't feel any less human than they did.
S. Salis: There are starting to be, you know, a few more critical voices about advancements without an ethical and moral check on them lately, in the past few years. One of them is Jaron Lanier. One of his most recent books is Ten Arguments for Deleting Your Social Media Accounts Right Now, and he also wrote Who Owns the Future? and You Are Not a Gadget. He's one of the fathers of VR. And he explains that it's no accident that the current definition of AI has a mystic aura around it: at the current state, AI mostly means machine learning through the ingestion of big data. Which means, from what I understand, you collect enough data and you find enough patterns that you can statistically predict what's going to happen, and so statistically make a decision, for example on when to restock the warehouse at the right time so Amazon doesn't run out of products to ship to somebody. But at the same time it can become very wide: it might determine what a person on Facebook might do, or when it's the best moment to influence them in order to sell a specific product through advertising. And then there are actors like Cambridge Analytica that might use that in a sense that we don't intend it to be used. What is the difference between machine learning, the current state of AI, and true artificial general intelligence? How can we distinguish them?
B. Reese: There's a nebulous concept called intelligence, and there's no agreement on a definition of it. That seems weird, but it isn't, because there's no definition of what life is, or what death is. There are all these big ideas that we just kind of know. Among intelligence, you have natural intelligence, which is what we have, and then you have artificial intelligence. And among artificial intelligence, you have two different kinds. Again, you have general intelligence, which is a machine as creative and as versatile as a human, and then you have narrow intelligence, which is a machine that can do one thing. Now, within narrow intelligence, I can think of five different techniques; I won't go through them all. We use evolutionary algorithms to evolve solutions, we use AI to model a system, and so on. The one that everybody's excited about now is what you just described, machine learning, where... you nailed it.
B. Reese: Exactly. You take a bunch of data about the past, you study it, and you make a projection about the future. Now, two things, to answer your question. First, there's where that works and where it doesn't. The philosophical assumption behind it is that the future is like the past. So I can train it to identify a cat, because a cat tomorrow looks probably like a cat today, right? But I may not be able to use it to identify your cell phone. Is a cell phone, you know, after September 12th, still going to look like the flip phones we had 10 years ago? So there are a lot of questions around language. Is language like that? Can you predict the next thing I'm going to say just based on all the data about everything I've said before? Banana. See, like, you wouldn't have known I was about to say "banana."
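The "take data about the past, study it, project the future" recipe Reese describes can be sketched as a toy next-word predictor. This is a hypothetical illustration, not anything from the book; the training sentence and the predict() helper are made up:

```python
from collections import Counter, defaultdict

# "Study the past": count which word historically follows which.
past = "the cat sat on the mat and the cat slept".split()
following = defaultdict(Counter)
for current_word, next_word in zip(past, past[1:]):
    following[current_word][next_word] += 1

def predict(word):
    """Project the most frequent historical successor of `word`."""
    if word not in following:
        return None  # no pattern in the past, so no projection
    return following[word].most_common(1)[0][0]

print(predict("the"))     # "cat": the most common follower in the data
print(predict("banana"))  # None: nothing in the past anticipates a surprise
```

The failure on "banana" is exactly Reese's point: a model that only projects past patterns has nothing to say about an input the past never hinted at.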
S. Salis: right?
B. Reese: Exactly. Exactly. Now, an artificial general intelligence, that's a system that's creative, a system that can, like MacGyver, figure out all these new things. I think it's an unproven point. In fact, I don't know that I've talked to any practitioners who believe that simply studying enough data is going to let us build that. I can't think of a single person who's ever told me, "Yeah, it's just enough machine learning." I mean, we have this test called the Turing test, which says that if you can't tell whether you're chatting with a computer or a person, you might have to say the computer is thinking. And every time I see one of those systems, I ask it the question: what's bigger, a nickel or the sun? Now, I've never had a single one get that question right, but a human has no trouble with it. And so you could imagine that maybe, with enough data, it might be able to get that. But then you get all these more complicated questions that have nuance, where you have to make assumptions and you have to guess, and then you have to say, well, maybe there isn't enough data. And that doesn't even get you to creativity and inspiration.
B. Reese: That doesn't get you somehow to J.K. Rowling writing Harry Potter, or Lin-Manuel Miranda writing Hamilton, or Banksy making the graffiti that he makes. I don't think any of that is really what machine language... I'm sorry, I keep saying that... what machine learning can do. I just don't think you can study the past, project the future, and do any of that general intelligence.

S. Salis: Do you watch Westworld?

B. Reese: I am required to watch all of it. Everybody asks me about every one of them, so I have to watch every episode of Westworld, every episode of Black Mirror, every movie. I have to watch every bit of it, because they really ask me about it. I have seen every episode of Westworld, yes.
S. Salis: Um, this is not about the series itself; I don't want to get into how good it is or how realistic it is. There is just one detail that, speaking of what we were talking about, came to my mind: what makes the difference between true intelligence and not, what sets those beings apart. I found it very interesting that it was improvisation. They mention it a few times. They say that everything is predictable, as far as machine learning and big data go, and the one difference is that improvisation is what makes those robots start something different, so as to have a little bit of consciousness or something. And I'm an improviser, and I find it incredibly liberating and incredibly constructive. How do you break out of programmed behavior? How do you create true intelligence? I am not a scientist, I'm not a physicist, I'm not a biologist; I am an improviser, a host, a content creator. But isn't it interesting that improvisation, in that show, is the key to breaking out of programmed behavior and machine learning?
B. Reese: They call it the reveries: the little remnants of past incarnations that somehow manifest themselves in different ways. Yeah. We don't know how to make a machine do that. The only way we would make a machine do it is that we would give it random behavior, but that's different than improvisation.
S. Salis: Wouldn't that be pretty dangerous? Because you need to...
B. Reese: Within a narrow scope.
S. Salis: Um, but one of the ways in which I think AI and machine learning are evolving fast is when you have two challenging AIs that confront each other on a problem and try to solve it differently. One tries to solve the problem in a certain way and fails; the other one tries to solve it the other way. And when those two AIs challenge each other (there was a paper confirming this), you eventually have a greater chance of finding a solution. So I thought that that was similar to improvisation.
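The dynamic Salis describes, two models improving by challenging each other, is loosely the idea behind generative adversarial training. A deliberately tiny, deterministic caricature (every number here is made up for illustration; real systems use neural networks and gradients):

```python
# A "generator" proposes a value, while a "discriminator" keeps a threshold
# separating real data (here just the constant 10.0) from the proposals.
TARGET = 10.0
gen = 0.0         # the generator's current proposal
threshold = 5.0   # the discriminator's real-vs-fake boundary

for _ in range(200):
    # Discriminator adapts: nudge the boundary toward the midpoint
    # between the current fake and the real data.
    threshold += 0.05 * ((gen + TARGET) / 2 - threshold)
    # Generator adapts: if its proposal is flagged as fake, improve it.
    if gen < threshold:
        gen += 0.1 * (threshold - gen)

print(gen)  # ends up close to (and just below) the real value, 10.0
```

Neither agent is ever told the answer directly; each only reacts to the other's latest move, and the contest drags both toward a better solution, which is the point Salis is making.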
B. Reese: There is an interesting corollary to that. You probably remember when AlphaGo defeated Lee Sedol. Move 37, exactly. So, the setup, for the listener, very quickly, is that AlphaGo is playing the best player in the world and makes this move that causes everybody to gasp: move 37. Was this a good move, a brilliant move? Was it a bad move? Did it mess up? And the people who made AlphaGo, even they are like, "What?" And so they open the system up, and they find that only one in 10,000 people would have ever made that move. And Lee Sedol, the player, looks at it and says it was a brilliant move. And this is the moment people began talking about AlphaGo's creativity. And the question to ask is: was AlphaGo creative, or did AlphaGo simulate creativity? Or is there no difference between those two states? That, to me, is the core question. Does the computer improvise, or does it emulate improvisation, or is there a difference?
S. Salis: I just started a new mailing list. It's The Hoomanist's weekly digest: a plain-text, curated collection of interesting links and articles that you wish a good friend would have shared with you. It's delivered every weekend to your inbox as simple plain text, and you can subscribe now for free on human.ist/subscribe. I'm Simone Salis, and this is The Hoomanist. Today's guest, Byron Reese, is the CEO of Gigaom. You say that, um, you know, computation itself is profoundly philosophical. What do you think that computers are, philosophically?
B. Reese: There is a viewpoint that says that, at the core, everything in the universe is computation. So you watch a hurricane, the winds blowing around, but what's really happening? There's a lot of physics going on in there, right? All sorts of forces, gravity among them, are acting on things, and temperatures are changing. And so you could squint and see that the entire hurricane is just a computer program running; it's just mechanistic. And then you say that anything in the universe, at some level, can be reduced (perhaps, this is a theory) to computation; that everything is computable; that you could start, hypothetically, at the Big Bang, and everything since is just computation that made us here today. And if you think of it that way, then a computer, which is just doing computation, is the same kind of thing as anything else. Computation is a form of reality, like the hurricane. I don't know if that's true or not, I really don't, and I don't even have an opinion about whether it's true or not. But it is a different way to think of a computer: as something that is much more like us, and like a tree, and like a hurricane, than it is something completely different.
S. Salis: You write a book about all of this, and you don't form even a minimal opinion?
B. Reese: Well, no, I don't. I don't want to say that; I'm just describing that one particular theory. I don't have an opinion, because it's beyond me. I mean, yeah, I understand Stephen Wolfram is a big proponent of that, and I've heard him explain it, and I don't understand it, so I don't want to have an opinion about something that I don't understand. I can describe it. You know, you may read what I just said and say, yeah, that's largely accurate, but I don't know if it's true.
S. Salis: Do you label yourself agnostic in that sense?
B. Reese: I would say this, and I tried to say it in the book: I don't hide any of my views. I'm happy to share them, yeah, absolutely. What I kept saying in the book is that my views aren't germane to what I was trying to do there, which is: it doesn't matter whether I think you have a soul or not. The question is, do you think you do? It doesn't matter if I think the world is materialistic. What matters is, do you think it is? So it felt inappropriate for me to project myself into it. I will say that I am unconvinced by the materialistic view, and I mean that. You have this brain you don't understand, and maybe that's fine; we don't understand the brain. But then we have something called a mind, and the mind is all of these things that the brain can do that seem kind of mysterious.
B. Reese: Your mind has a sense of humor, whereas I doubt your liver or your kidneys have a sense of humor, right? So the brain has these properties that are really mysterious, and then we have consciousness. Now, consciousness is something people say we don't know what it is, but that's not true. We know; everybody agrees exactly what it is. It's the experience of being. You know, think of it this way: you can feel warm, but a computer can only measure temperature. And that difference, the difference between experiencing the world and measuring the world, that's consciousness. Now, as far as scientific questions go, it's been described as the last great one. We don't know what the answer would look like, and we don't even know how to ask the question scientifically. So the most immediate fact we have about ourselves is that we experience the world. That's the primary thing, you know, and we don't have any scientific way to explain how matter can experience the world, and so I am completely unconvinced that it is materialistic.
B. Reese: I have no reason whatsoever to believe that it is. That is not to say that it isn't, but, you know, the amount of evidence that suggests to me that it is, I find to be zero. I can give you the logic. The logic says: life is biology, biology is chemistry, chemistry is physics. Everything's physics, one way or the other. Consciousness? That's physics; you just don't understand it. That's fine, it's just physics, it's materialistic, we can build it, we'll make a machine. But I do not, in any way, shape, or form, accept that.
S. Salis: Okay. Can it be both? Because, the way that I see it, Stephen Wolfram might be right when he says the universe may have started with just a few lines of code. And I think somebody asked him, "How many lines are we talking about?", and he said, "Oh, not that many. Twenty or 30." Twenty or 30 lines, he said, that just iterate over time. That's it. It's like a fractal that evolves. That's exactly right. And, you know, I understand that you have a very specific, scientific, and research-driven point of view in your book, and I don't want to particularly step outside of it, but, just chatting between human beings with consciousness, I personally don't see that materialistic view as incompatible.
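The few-lines-of-code picture Salis is gesturing at is usually illustrated with Wolfram's elementary cellular automata. Rule 30, for example, is a complete update rule that fits in a handful of lines yet, iterated over time, produces famously intricate behavior (the grid width and step count here are arbitrary choices):

```python
# Wolfram's Rule 30: each cell's next state is the bit of the number 30
# selected by its three-cell neighborhood (left, center, right).
RULE = 30
WIDTH, STEPS = 31, 12

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # start from a single "on" cell

for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))  # draw the current row
    cells = [
        (RULE >> (cells[(i - 1) % WIDTH] * 4
                  + cells[i] * 2
                  + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```

Run it and a single dot grows into an ever-widening, irregular triangle: a simple rule iterating into complexity, which is the intuition behind the "few lines of code" claim.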
B. Reese: Yeah. I mean, it would be a big idea if a cloud of hydrogen somehow became conscious and then named itself. That requires a lot more faith to believe than I have. I'm more inclined to believe Yoda, you know, when he says, "Luminous beings are we, not this crude matter." Consciousness feels much more luminous than it feels mechanistic to me, but that's a subjective experience. That's just me. Uh, and that's why I don't put any of that in the book. I mean, could you even tell? Were you able to see my preferences or predilections through it when you were reading it?
S. Salis: No. No, I will absolutely give you that: you are very impartial. You have an incredible amount of facts, and your arguments are constructed almost as a textbook. Again, when I say that it's a history of technological advancements for human beings and a history of what didn't happen yet, it's because you explore every conceivable possibility. You have arguments and counterarguments; you have dystopia and utopia. And yet I can see that, at the end of the day, you have a sense of optimism. Or maybe that's what I want to read into it.
B. Reese: No, that is true. I mean, the last section of the book says in the future there'll be no hunger and there'll be no greed and all the children will know how to read. I mean,
S. Salis: Yeah, there is literally, you know, in one of the explanations you have towards the end, one titled "No More War." And, um, you talk about the positive accomplishments that we can potentially achieve with artificial intelligence, and you literally say that the conditions that foster war are vanishing: the lower the per-capita GDP in a country is, the higher the likelihood of future war. So if we end poverty, we reduce war; that's the sense. And food insecurity is also a predictor of future conflict, so if we beat hunger, then we reduce war. Illiteracy: if we beat that, we also reduce war, because it reduces poverty. And so it's that kind of chain: AI will ignite a series of changes that will reduce the factors that make war meaningful to human beings. And I want to be optimistic; at the end of the day, I am optimistic like that. But among the first implementations of machine learning and AI that we are seeing, some states or countries, like China, for example, are starting to deploy facial recognition and big-data analysis to direct police officers to go into some of the Muslim regions and send people to re-education camps.
S. Salis: They're seizing and analyzing text conversations on WeChat, and tracking people overseas depending on the apps that they use. That's among the first deployments of machine learning and facial recognition at this scale. And, of course, there are a million amazing things that we can do: we can prevent heart disease, we can learn a lot about medicine, and we can start to find cures that otherwise would be impossible to find, if not through this data ingestion. How do we limit the malicious use of these technologies and keep their future deployment on the optimistic side? Malicious use will be inevitable, but how do we successfully limit it?
B. Reese: Well, that's a fantastic question, and I'm not going to give you a glib answer, nor am I going to deny, you know, any of the basic truth of what you just said. It is true that in the past we could all maintain anonymity, largely because there were so many people that no government could listen to every phone conversation; it couldn't watch what everybody did. And it is true that, with these technologies, the very same tools we build to do good things, like spotting cancer, can be used to spot subversive people. It's all the same technology. And so you're right, and there's no easy way around that. I will say, however, that in the end, um, people have to decide that they value that not being the case. And then they have to get their... and I'm just talking right now about free and open societies.
B. Reese: They have to get their legislatures to ensconce it in law, and then they have to make sure that safeguards are in place so that it is not done, and then, when inevitably it is, you have to make sure there's accountability. The price of liberty is eternal vigilance. I mean, that is true, and so there's no easy way. The short answer, in open, free societies, is that people have to decide that, argue that, and demand that their legislatures enforce it. There are other parts of the world, though, that are authoritarian, that will not. And I mean, I'm talking about places far more draconian than China and the like. They have a different journey to go: they have to get accountable government first, and, you know, governments are resistant to change. But the good news there, the reason I don't think people should despair, is that democracy is sweeping the world.
B. Reese: If you were to count the number of democratic countries after, say, World War II (I'm going to mess up the numbers; I apologize to the listeners), it was something like 14, and now there are 160. It is transparency. These same exact technologies increase the power of people to connect and protest their government; we saw that in the Arab Spring. The same technologies allow access to information, and people can write and distribute it to like-minded people. And so, with all of these things, it's like Nietzsche said: when you gaze long into the abyss, the abyss also gazes into you. These technologies can be used for good, and they can be used for evil, and it's incumbent on us to use them for good. Now, the final piece of good news I'll share is that more people will use them for good than for evil. How can I say that so confidently?
B. Reese: That's an easy one for me. You know, there was a time, 100,000 years ago we think, when humans got down to just a few thousand mating pairs. I mean, we were an endangered species, and one epidemic could have wiped us all out. And even then, for tens of thousands of years, it took all of our time just to find food. And then when we had cities, 10,000 years ago, it took 90 percent of us to grow our food. And then we slowly made this better and that better, and we invented human rights, and we outlawed cruelty, and we outlawed torture for entertainment, and we did all of these things. And now we're in this world where everybody's empowered, everybody has absolute access to knowledge and communication and all of this stuff that hitherto didn't exist anywhere on the planet 30 years ago. And that, to me, is a story of us getting better and better. Had we been disproportionately destructive, had we been disproportionately evil, how would we have ever made it through all of that? We made it because most people are interested in building, not destroying.
S. Salis: That's good, and that's true. That's the optimistic part of the book, and that's what also comes through in the very last chapter: in the end, everything can be used for evil or for good, but mostly human beings try to build and not to destroy. You know, you also mention how there is this illusion that some jobs will simply go away. You use the example of the ATM, the automated teller machine: when it came around, most bank tellers thought, well, we all know what it stands for, automatic, so we won't be needed anymore. But in the end the ATM generated a lot of jobs: there is the technician who needs to repair the machine, the person who puts the money in it, the engineers in the factory that builds the automated teller machine. So, in the end, do you have a counterargument to all the job losses that, the headlines say, will come because of AI in the future?
B. Reese: Right. And to be clear, there are actually more tellers now than there were then, because the ATM lowered the cost of opening bank branches; banks opened more branches, and every one needed tellers. So there are actually thousands more tellers today than there were when it came out. The book explores three possible futures for jobs. One view says the machines are going to take all the jobs and we're going to have permanent unemployment, and it's going to be like these dystopian movies. Then there's a view that says: don't kid yourself, at some point the machines can learn to do everything better than a human, and when they do, that's it, there are zero jobs for anybody. There are no poets; the machines write the poetry. There's nobody who's a songwriter; they write the songs. But then there's the third view, and I explore it in the book, and I think it's the one that gets the least airing.
B. Reese: So let me lay it out. It's pretty simple. I'm not just going to say, oh, it's like the Industrial Revolution. Let's look at it a whole different way. In this country, the United States, we've had 250 years of full employment, and by that I mean unemployment has been between five and ten percent that whole time, other than the Depression, when it was 20 percent or higher, but that wasn't caused by technology. So just indulge me that we've had 250 years of full employment. Meanwhile, I think the half-life of a job is about 45 years. Do I talk about that in the book?
S. Salis: I don't think you mention it.
B. Reese: I don't think I do. I've been working on this lately. I think every 45 years, half the jobs vanish. So between 1850 and 1900, half the farming jobs vanished; 1900 to 1950, another half; 1950 to 2000, half the manufacturing jobs went, and so forth. And I don't think we're seeing anything all that different right now. Meanwhile, if every 50 years (we'll just call it 50, for round numbers) half the jobs are going away, how do you have full employment? How do you maintain that? Furthermore, you have these disruptive technologies that come in, like steam power: all of a sudden, millions of people who handled draft animals weren't needed, but all these other people were needed. And then you had the assembly line, which is, by the way, a kind of artificial intelligence, a scary one if you had learned a craft and then you saw this process that could make a better chair than you, cheaper, with untrained people.
B. Reese: That's frightening. You had these big technologies coming out and, again, no change in full employment. And here's the setup you hear in the news a lot, and I encourage people to listen for this. It goes like this: technology is really good at creating high-paying technical jobs, like geneticist. However, unfortunately, it destroys jobs like order taker at McDonald's. And then they say (and this is what I encourage people to listen for): do you really think that order taker is going to become a geneticist? They don't have those skills. And you think, yeah, I guess not. But the answer is no, the order taker won't become a geneticist. A college biology professor becomes that geneticist, and then the high school biology teacher gets the job at the college, and then the substitute teacher gets hired at the high school, all the way down the line.
B. Reese: So the question isn't, can that order taker become a geneticist? The question is, can everybody do a job a little bit harder than the job they have today? And I think that's true, and that is 250 years of economic history: technology creates great new jobs, destroys bad jobs that machines can do, and what happens? Everybody shifts up one notch. We can never run out of jobs, because jobs are made the instant somebody takes a piece of technology and does something with it. So no, I think the chances that you're going to see any bump in unemployment are nil. These are technologies that make people more productive; they increase opportunities and increase wages. They're going to create these amazing new jobs and destroy ones at the bottom. In 50 years, half of all the jobs will be gone, and everybody will shift up.
S. Salis: You know, one other thing would be that, through automation, there will be more opportunities to get rid of the oldest jobs, the ones that can be automated. Like you say, in the factory, a repetitive job that goes on for eight, ten, twelve hours does destroy the brain at some point. So I would be really happy if humanity had a chance, at some point, to face the real questions of being on planet Earth, which is not waking up in the morning, doing a repetitive task in a factory in China for 18 hours, and then going back to sleep and starting over again. That would also be great. I hope we can achieve that.
B. Reese: I show in the book that for any job a machine could do... just imagine some job a machine could do. I'm looking around my office: window washer. You can imagine we'll make a robot that can wash windows, right? If you make a person do that job, there's a word for that: dehumanizing. If you make a person do a job a machine could do, you've just taken everything about them that's human and told them to put it on hold; you just need them to be a machine. And I think that is a terrible, terrible waste of human potential. I think everybody can do things that only people can do.
S. Salis: I'm Simone Salis, and this is The Hoomanist. You can listen to every episode of this show on hooman.ist and on your favorite podcast app. I created The Hoomanist as an independent media project for logically thinking, aware, contemporary humanists. You will find articles, a curated mailing list, and older podcast interviews on hooman.ist. This is a challenging solo project that takes hundreds of hours each month of coding, writing, recording, editing, graphics, and publishing, and if you would like to keep enjoying new content regularly, please become a patron now at hooman.ist/support. Today's guest is Byron Reese, author of the book "The Fourth Age".
S. Salis: I am a great fan of automation, and I try to apply it to my own little bit of content creation too. For example, whenever I write a new article or record a new episode, most of the tasks in editing and after editing (cutting, exporting, uploading, sharing through templates on social media) are automated. After I export the episode, most of the job is done for me, except keeping a human touch in it. My newsletter I edit through Markdown: articles are collected through Zapier, formatted, and sent to me, so to create the newsletter I basically just have to read it and send it; I don't have to format anything. If independent content creators harness this little bit of automation, it can be useful for us. You also mention universal basic income experiments that, in theory, could supplement income in one of the other scenarios.
S. Salis: I think it's scenario number two that you explore in the book. Let's say that one day we go past all this level of automation, we are freed, and with the time that remains we start to contemplate life and death, and we look for a way to upload ourselves or merge ourselves with our own machines. I know it's an incredibly wide topic to touch in a discussion, but your book flirts with it: you analyze how, once we reach a technological breakthrough and a singularity, we might start to find a way to either merge with our machines or acquire for ourselves those superpowers that they will embody, either by becoming them or merging with them. So you talk about the goal that Ray Kurzweil has been trying to pursue at Google for a while, at least that's what is whispered about him and Sam Altman, of uploading his own consciousness and brain to the cloud and reproducing it there, which might be a trick to give consciousness to a machine by simply injecting our brains into it and becoming it.
B. Reese: So, to be clear, I am not a Singularitarian, though I explore it all in the book, where I try to look at every possible scenario. We can either learn to implant computers in our brains, or we can learn to copy our consciousness into a computer. That is completely possible, if you believe it. It's a really simple idea, right? If you could take the state of every atom in your brain and model that in a computer, then there you are. That's the logic, oversimplified, but that's the basic idea. If you don't accept that, then it is really hard. I will tell you something interesting I learned doing the book. I open with a question, you know this, which is: what are you? And I give people three choices.
B. Reese: The first is that you're a machine, and everything that happens is physics; you're a big bag of chemicals and electricity. It's a mechanistic view of the world: you wind the clock and the clock goes, and that's it. Then I say there's a second choice: you're an animal, and that means you have this mechanistic body, but you have this thing called life, which we don't quite understand. Or there's a third choice: you're a human. Of course you're human, but what I mean by that is: yes, you have a body; yes, you're alive; but you have something else. You're conscious. Maybe you have a soul, maybe something else. Now, here's what I learned that I think is interesting. When I ask people on my AI podcast, Bay Area-based people working in AI, 90 percent or more choose... which option do you think?
S. Salis: I'm going to go ahead and say human. I'm going to be optimistic and try.
B. Reese: Even Stephen Hawking said: "I don't have any illusion of an afterlife. I'm just like a computer. When my body goes, it's like turning the computer off. There's nothing more than that." And that view of the world pervades: of course we can build all this, because we're machines. Now, I have a quiz on [inaudible].com and on my website, byronreese.com, where I ask people those questions, and I get a cross-section of the country, and 85 percent of those people, guess what? They say they're human. So this is a huge disconnect I have discovered between the vast majority of people and the AI world. I will say that even when I gave a draft of my book to my editor, and it had, you know, one possibility, "you're a machine", he wrote in the column: "Well, does anybody really believe that???"
S. Salis: Oh yeah.
B. Reese: Isn't that interesting? So that's New York, that's the humanities world and all of that. But the thing is, everybody I know, like everybody on my show, believes it. And I believe that simple difference of view accounts for why people have such radically different views about these technologies. Will AI take all the jobs, every single one? Well, if we're machines, then yeah: if we're machines, eventually you'll build a mechanical human. It's that simple. But if we're not machines, then no; there are things that only a human being can do.
S. Salis: You know, again, you're very clear in laying out these theories, but if I were to choose, I think I would choose one and three for myself, because I think that we are human machines. In your book you explain that if you think anything different from pure mechanism, for the sake of moving ahead you can just consider yourself dualistic, consider yourself human. But I think that, again, like in the first...
B. Reese: But if you think that you are a giant clockwork that somebody winds up, do you have free will or not?
S. Salis: That I cannot answer. I don't know. At least I have the illusion of it, even if I don't have it, right?
B. Reese: But the Bay Area people are like: nope, you don't have it.
S. Salis: Well, I don't know. We get to the point where... do I need to believe that I have free will, and thus I see it and project it onto myself? And even if I don't have it, what does it matter? It's like simulation theory: are we an ancestor simulation? Do we freak out, like Elon Musk does, or do we just go ahead and enjoy the steak, like the character in The Matrix does when he asks to be reinserted?
B. Reese: Even then, most people would say this isn't a simulation, because I have subjective experience.
S. Salis: You wouldn't know if it was. Fair enough, fair enough. But...
B. Reese: Oh no, I know something is feeling. So maybe... no. "Everybody just thinks they feel warmth, and they don't really"? At some point it just sounds preposterous, and you think: no, I'm going to go with the simpler explanation. This is reality.
S. Salis: Yeah, that's it. That's reality. Well, alright. So I'm going to take you as a number three, the way you see yourself.
B. Reese: Yeah, well, I'm completely unpersuaded by number one. I am a number three, a human. But hold on a second: there was one of those people... I'll think of it in a minute, a 17th-century English person, and they asked him if he had free will, and he said: well, everything I can reason says I don't, but everything I can feel says I do. And I think we're all like that. Do we have it? It sure feels like it. So I'm going to assume I have it, because it feels like I do. I'm going to assume I'm conscious. And option one has never explained to me how that can be. You don't have to necessarily get spiritual to be a number three: humans can have some kind of strong emergence; there could be some fundamental aspect of physics we don't even understand that is happening in us. I mean, there are so many other things we could be than this simple "every cause has an effect, every effect has a cause, physics explains everything". That's basically that viewpoint, and I am unconvinced that it is true.
S. Salis: You also talk about reconsidering how consciousness might sit on top of everything, as opposed to physics. If I'm not wrong, that implies reconsidering science as we know it, where physics sits under chemistry, chemistry under biology, and so on; you reverse that order for the scientific world. But if you see yourself as a human, let's say that we fast-forward 40 years: would you upload yourself and hang out with Ray Kurzweil somewhere?
B. Reese: So the question is: would I upload my brain?
S. Salis: Let's go with a specific scenario. There is this scanner that can hold the whole configuration of yourself, pretty much like a teleporter. You make the example of a teleporter, like a comic book scenario that explains exactly that: you get teleported inside a machine, you get a complete scan, your neural paths get screenshotted at that moment, it gets uploaded, and you wake up as a machine, in a virtual reality or as a machine itself. And it is very much still Byron Reese; a number-two, twin version of yourself, just outside of your original human body. Would you do that? It's just a little game, and it might be simplistic, but I'm curious.
B. Reese: That's great. You know, another version of it is: there's a machine you step into, it takes you apart one atom at a time, it scans you, and then a 3D printer on Mars rebuilds you, and you step out saying, wow, that was easy. Would I do it? No, thank you.
S. Salis: Why is that?
B. Reese: Again, that's a view-number-one, mechanistic view: that if you could capture all the information about every one of my atoms, that is in the end what I am, and if you could rebuild that on Mars instantly... but it's not me. It's a fantastically good copy of me, but it is no longer me. You remember I use the Ship of Theseus example?
S. Salis: Oh yes, yes.
B. Reese: It's called the Ship of Theseus. It's this famous Greek ship from ancient times that they tried to preserve, but every time a piece rotted, they replaced it, until eventually none of the original was left. Is it the same ship? So it's the same thing again: it all boils down to, what are you? And I am more than the information on the position and trajectory of the atoms that make me up, I believe. Again, if you're listening to this, the book isn't about what I believe on any of this. I try to break all these problems apart, so that this kind of conversation that you and I are having is what I try to have with the reader.
S. Salis: And this is entertaining to the measure that it allows us to understand why the book was written in the first instance: you're just trying to make people reason the way that we are doing now.
B. Reese: Correct. That's exactly right. And believe me, I could very well be wrong.
S. Salis: You close the book with the story of Gilgamesh, the myth of Gilgamesh trying to defeat death and become immortal, eventually realizing that true immortality is achieved through what you do in the time that you have, not through how much time you have, and that being remembered for what you achieve in life is what matters most. Now, maybe we will all be uploaded and understand that we stayed the same, because there actually is a dualistic plane of existence and copy-pasting maintains the same dualistic self; we just switch bodies around. Maybe, who knows. But let's say that for now we can't, and we accept our mortal time on this planet. And this is something that I ask all my guests, each in a different way. The moment before we expire, when it comes to that moment, one looks back. What do you think will have mattered the most to you, Byron Reese, in this experience, in this amount of time that you have? What are your main drives? What is something that you consciously try to work on, on a daily basis, so that it will feel important and satisfactory to you?
B. Reese: Well, interestingly, I've never talked about this publicly, but I think about my death every day. I used to have this habit of writing things on my hand that I needed to remember to do, and so I still had this reflexive habit of looking at my hand all the time, just because there would always be stuff there. And it was interesting, because if nothing was written on my hand (this was when I was in grade school, so we're talking way back), if I had nothing on my hand, it was like I didn't have anything to do, and I decided I wanted to never think that. So I did. I put on it what the ancient Romans called a "memento mori", a reminder of death. I put this; can you see it on the camera there? This little tattooed spot. I actually went into a tattoo parlor and got this spot.
B. Reese: It looks like somebody just took a pen and jammed it into your hand once, just this little spot of black ink. And every time I look at it, I think: I have something I should be doing right now, because I'm mortal and I'm going to die. So I think about my death every day. I also write a newsletter about my family every year, and it always has a letter at the beginning, which is all about life and beginnings, and it always has a back page about endings and death, always talking about my own death, because I assume maybe this is the last one I'm going to write; you never know. And I think in the end our challenge is to be great ancestors, and that's what I try to be. I included this Dr. Seuss quote once, and it said: don't cry because it's over, smile because it happened. And I guess that is it, in the end. I want those who come after me to smile because it happened.
S. Salis: Thank you, Byron. Thank you for writing the book and taking the time to share your thoughts with me. You can find "The Fourth Age", published by Simon and Schuster, right now on Amazon and by visiting byronreese.com. There are also plenty of talks, including the TEDx in Austin, that you shared on your website. Thank you for taking the time.
B. Reese: It was a lot of fun. You asked me very different questions than other people do, so it was a pleasant change. Thanks a lot.
S. Salis: Thank you so much. Byron Reese shares thought-provoking data and analysis of how AI and technology will reshape society, for better or worse, with extreme clarity. Byron lays out complex philosophical concepts for the reader to form their own opinions about the Fourth Age itself.
Don't forget to subscribe and listen to more interviews from The Hoomanist on your favorite podcast app. The Hoomanist is a solo project created and produced by just one person: me. To keep enjoying new episodes and content regularly, please show your support now at hooman.ist/support, or just sign up to receive the free Weekly Digest at hooman.ist/subscribe.