Rate, review and subscribe to Equity Mates Investing on Apple Podcasts 

Expert: Kenneth Stanley – AI researcher on how innovation occurs

HOSTS Alec Renehan & Bryce Leske|24 November, 2022

Kenneth Stanley is a former Professor of Computer Science at the University of Central Florida and most recently led the Open-Endedness Team at OpenAI.

This interview will open your mind on goal setting and success.

Kenneth isn’t an investor, he’s deep in the AI space … and for all of us, understanding where the world is at with AI will definitely help our investing decisions now … and for the future.

Book: Why Greatness Cannot Be Planned: The Myth of the Objective – Kenneth O. Stanley

*****

Looking for an investing book gift for a loved one this Christmas? Order ‘Get Started Investing’, written by Equity Mates Alec and Bryce. Available on Booktopia and Amazon now!

If you want to let Alec or Bryce know what you think of an episode, contact them here

Stay engaged with the Equity Mates community by joining our forum

Make sure you don’t miss anything about Equity Mates – visit this page if you want to support our work.

Have you just started investing? Listen to Get Started Investing – Equity Mates series that breaks down all the fundamentals you need to feel confident to start your journey.

Want more Equity Mates? Come to our website and subscribe to Equity Mates Investing Podcast, our social media channels, the Thought Starters mailing list and more, or check out our YouTube channel.

*****

In the spirit of reconciliation, Equity Mates Media and the hosts of Equity Mates Investing Podcast acknowledge the Traditional Custodians of country throughout Australia and their connections to land, sea and community. We pay our respects to their elders past and present and extend that respect to all Aboriginal and Torres Strait Islander people today. 

*****

Equity Mates Investing Podcast is a product of Equity Mates Media. 

This podcast is intended for education and entertainment purposes. Any advice is general advice only, and has not taken into account your personal financial circumstances, needs or objectives. 

Before acting on general advice, you should consider if it is relevant to your needs and read the relevant Product Disclosure Statement. And if you are unsure, please speak to a financial professional. 

Equity Mates Media operates under Australian Financial Services Licence 540697.

Equity Mates is part of the Acast Creator Network.

Bryce: [00:00:15] Welcome to another episode of Equity Mates, a podcast that follows our journey of investing. Whether you're an absolute beginner or approaching Warren Buffett status, our aim is to help break down your barriers from beginning to dividend. My name is Bryce and as always, I'm joined by my equity buddy, Ren. How are you going? 

Alec: [00:00:31] Very good, Bryce, very excited for this episode and this interview that we've got coming up. It is a real privilege that you and I have here at Equity Mates. We get to, I guess, speak to experts all around the world, get to learn in public. And this is certainly an interview where we learnt a lot. 

Bryce: [00:00:47] Absolutely. Opened up my mind to a whole raft of things to ponder. We were lucky enough to be joined by Kenneth Stanley, who is a former professor of computer science at the University of Central Florida and most recently led the Open-Endedness Team at OpenAI. Kenneth is the co-author of Why Greatness Cannot Be Planned: The Myth of the Objective, and we cover a lot of ground. 

Alec: [00:01:12] We first came across Kenneth on Patrick O'Shaughnessy's podcast, and it was a fascinating conversation and one that we wanted to pick up on and run with, especially because Bryce, you are the ultimate planner. Yes. Ultimate strategist. Yes. Goals on goals on goals on goals. 

Bryce: [00:01:28] Love goals. Love objectives. So this interview opened my mind to a new way of thinking. 

Alec: [00:01:34] Really? Yeah.

Bryce: [00:01:35] Yeah, it was. I'm not saying that I'm going to change how we run anything, but it opened up my mind to a new way of thinking. 

Alec: [00:01:42] And so for Equity Mates who are used to straight investing content, this will be a little bit different. We do speak about some well-known companies. We speak about Apple, we speak about Tesla. We speak about a very interesting bit about Mark Zuckerberg and his plans in the metaverse. But Kenneth isn't an investor. He's an AI researcher, a former computer science professor. And that's really where we start. We start with the book, and we go into AI. For me, it was a fascinating conversation. AI is not coming. It's here. We're seeing more and more, I guess, use cases for it. And so this is a really interesting conversation with someone who's on the front lines. 

Bryce: [00:02:22] Absolutely. It is our pleasure to welcome Kenneth Stanley to the studio. Kenneth, welcome. 

Kenneth: [00:02:27] Thank you. Very happy to be here. Thanks for having me. 

Bryce: [00:02:29] So Kenneth is a former professor of computer science at the University of Central Florida and most recently led the Open-Endedness Team at OpenAI. Kenneth is the co-author of Why Greatness Cannot Be Planned: The Myth of the Objective. And to tease you guys, he's currently starting something new, and that's all we can say for now. So we might be able to get into that a little bit later as well. But I'm very excited for this interview. As Ren said, we're going to be covering off all things AI, what the future looks like, as well as the objective paradox, which is where we'll start. 

Alec: [00:03:03] Yeah. Kenneth, we want to start with your book and the objective paradox. When I first came across this idea, I loved it, partly because Bryce is a meticulous planner. Bryce loves a timeline and a strategy and a spreadsheet. And then I came across your book, which introduced this idea of the objective paradox and really explained why greatness cannot be planned. And so we really want to start there, with that premise. Why can greatness not be planned? 

Kenneth: [00:03:34] So this is a very counterintuitive idea that I kind of ran across really serendipitously by doing research in artificial intelligence. Normally you wouldn't expect that research in artificial intelligence would lead to general insights about life, or about pursuing your objectives, or things like that. So it's quite surprising, but this research exposed this problem: that setting an objective can go wrong. And if you're wondering why this comes up in AI, it's because that's basically what our algorithms always do. In machine learning we almost always say, here's the objective, as in, this is the thing that I'm trying to get the system to learn to do, and then try to move towards the objective. So this is the kind of thing that we're really used to doing. And what we discovered is that sometimes setting an objective can actually be really bad for you. In fact, setting an objective can prevent you from achieving the objective, and not only prevent you from achieving the objective, but also prevent you from achieving anything else that might be interesting that you could have achieved. So it is quite a serious issue, you know, because if this is a ubiquitous principle (like you said, I sometimes call it the objective paradox), then it affects things throughout our culture, the way that we do things as individuals and as institutions, because we're just saturated in objectives. And so we need to, I think, try to grapple with the implications of this, if this is right. 

Bryce: [00:05:00] Yeah, it kind of freaks me out thinking about this because, as I said at the top, I love objectives, love goals. So I'm interested to unpack this a little bit more. Before we do, are there any sort of clear examples that you could use to help illustrate this premise, both in AI and perhaps more generally in society? 

Kenneth: [00:05:18] Yeah. So maybe I'll take a general example, just to make it as general as possible. One thing that I should note, which is kind of a caveat to this point, is that when I'm speaking about objectives here, I'm speaking about a really ambitious agenda. That's why the title of the book is Why Greatness Cannot Be Planned. Things that are modest do work as objectives. You know, you can continue to have your planning book and things like that, and it will usually work if you're just trying to do something modest. This is about innovation and discovery, really blue sky types of things, things that we wish we could do but don't know how to do, like curing cancer or creating artificial general intelligence, really out-there stuff. As an individual, it might be something like making $1,000,000,000, or it might be something like finding love. These are things where there's not an obvious path. If it's something like wanting to lose weight, that's a kind of goal that, while it may feel ambitious to you, is actually something that many people have done, and there are known stepping stones to follow. So that's not the kind of thing where I'm saying you shouldn't have objectives. But we're talking about this blue sky stuff, so let's think about something really blue sky. Go back to, say, the year 1850, and let's say that the objective is to build a computer. People were actually talking about computation around those dates. They didn't have any idea how to build something like that, at least not something that was practical, although there were some mechanical ideas that were fanciful. But let's imagine we want to build something like the digital computers we have today, or even those of the 1950s, and we're in the year 1850. 
Now, the premise here is that it would actually be a bad idea to have that objective, to set that objective. It's clearly blue sky, it's very ambitious, and there's nothing like that in the world. Imagine: a machine that does all this stuff by itself, in the year 1850. You might say, well, if we realised we could do that, which obviously can be done, why not just get all the geniuses of the day together and have a big project? We can just do it, right? Why wouldn't we? We should have just done it earlier, and then the internet would be a lot better by now. Well, here's the problem. It turns out that people at that time were researching something called vacuum tubes, and there were a lot of smart people researching vacuum tubes. This was for the purposes of doing electrical experiments, like learning about electrical properties. And of course we've heard of vacuum tubes. An interesting thing about vacuum tubes is that they were inside the first computers. You basically needed them to get to the first computers, the first computers of the 1940s, like the ENIAC, for example. So people were looking at vacuum tubes. But guess what? Here's the part where the paradox comes in. The people who were looking at these vacuum tubes were not trying to build computers. And you see, here's the problem: suppose we took those people, who presumably are generally pretty smart, and we said there's a better thing to look at. These vacuum tubes, they're cool, but they're kind of boring if you compare them to a computer; a computer is way more cool than a vacuum tube. Well, then you take them off the vacuum tubes. And now we have a problem, because the stepping stone that we needed to build computers, no one is working on it anymore. They're actually working on computers, but they don't have the thing they need to build them. So it's an act of futility, and none of it will come to anything. 
And so we've destroyed our ability to get to computation by making everybody work on computation. This is an example of the objective paradox. And what's so disturbing about this is that it's a general issue across all inventions. I would even claim it applies to every single interesting thing that's ever been invented. Now, if you take any invention and look back only a few years, it might look like there was an objective and it worked out. You could give examples, and I can always try to back up further and show you how it doesn't really work that way, but at first it looks like it does, because that's the mythology we're fed: somebody with a really strong personality sets an objective and just moves definitively towards it, and that's how it's done. But the thing is, if you go back far enough, there's a stepping stone that doesn't make sense, and this will always be true. And if you had said at that point that we should be working on this objective, that stepping stone would be eliminated, and it would have been impossible for the invention to happen when it did happen. So the real problem, to generalise it, is search. Search meaning when you're looking for things that will help you get to where you want to go. You can think of it like a search process; in my field 'search' is a really common word that's used a lot, we think of search algorithms, but you can also think of searching informally, just trying to look for the path to where you need to go. Well, the problem is that search is highly deceptive in complex spaces. And what deceptive means is that the stepping stones that lead you to where you want to go don't actually resemble where you want to go. If you think about it, that's totally intuitive, because if they did resemble where you want to go, well, then it would be easy and you would just go there. We would just solve cancer and it would be cured. 
Clearly, the stepping stone that leads to the cure for cancer doesn't look like the cure for cancer. In other words, it may come from a totally different field. It may not have to do with biology or medicine or anything like that. We don't know where these things are going to come from, and that's just a property of all complex spaces. It's almost a truism, because if it wasn't a property, the problem wouldn't be called complex or hard. It would just be called easy. And the problems we actually care about are the hard ones. So deception means the stepping stones are surprising, which means that if we set objectives, we're going to blind ourselves to the potential stepping stones, because they're interesting for orthogonal reasons. They're not interesting for the same reason as the thing that you're going towards, and therefore we're going to ignore them. And so that's the objective paradox. 
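Kenneth's point about deception comes out of his research on novelty search. The sketch below is a hypothetical toy, not Stanley's actual method (which evolves neural networks and measures novelty over behaviours), but it illustrates the core idea on a one-dimensional landscape: a greedy, objective-driven climber gets trapped on a deceptive local peak, while a searcher that ignores the objective and simply prefers the states it has visited least often eventually stumbles onto the true optimum.

```python
import random

# A deceptive 1-D landscape: the objective score rises towards a local
# peak at x=4, then falls into a valley before the global peak at x=12.
SCORES = [0, 1, 2, 3, 4, 3, 2, 1, 0, 1, 2, 3, 10]

def objective(x):
    return SCORES[x]

def greedy_search(start, steps):
    """Objective-driven: always move to the higher-scoring neighbour."""
    x = start
    for _ in range(steps):
        neighbours = [n for n in (x - 1, x + 1) if 0 <= n < len(SCORES)]
        best = max(neighbours, key=objective)
        if objective(best) <= objective(x):
            break  # stuck on the local peak: the deception trap
        x = best
    return x

def novelty_search(start, steps, seed=0):
    """Novelty-driven: ignore the objective, prefer least-visited states."""
    rng = random.Random(seed)
    x, visits = start, {start: 1}
    best_seen = start
    for _ in range(steps):
        neighbours = [n for n in (x - 1, x + 1) if 0 <= n < len(SCORES)]
        # Novelty = how rarely we've been there; break ties randomly.
        x = min(neighbours, key=lambda n: (visits.get(n, 0), rng.random()))
        visits[x] = visits.get(x, 0) + 1
        if objective(x) > objective(best_seen):
            best_seen = x
    return best_seen

print(greedy_search(1, 50))     # climbs to the deceptive local peak, x=4
print(novelty_search(1, 500))   # wanders without a goal, finds x=12
```

The greedy climber optimises the objective at every step and never escapes the local peak; the novelty searcher never looks at the objective at all, yet covers the space and finds the global optimum along the way, which is the paradox in miniature.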

Alec: [00:11:07] Yeah. When you start to explain it, it makes so much sense. And I guess the question then becomes, you mentioned trying to cure cancer a few times, so for people who are working on that problem and dedicating their life to it, what's your advice to them? And to bring it back a step, for people listening who are trying to build and grow their own business, what's your advice to them, if they shouldn't be setting those big, hairy, audacious goals? 

Kenneth: [00:11:40] Yeah. So my general advice applies if you want to be involved in blue sky discovery in general, which doesn't necessarily mean just invention. It could mean finding something really meaningful to you, or it could be an artistic endeavour. There are all kinds of blue sky discoveries you can have in life, but whatever it is, it's something where you don't really know how to get there, or something where no one knows how to get there. Now, not everybody wants to do things like that, and that's fine. If you want to play it safe, do things that everybody knows how to do. If you want to get a degree as an accountant, you could do that; you know the stepping stones. That's not because it's a bad thing to do. It just happens not to be super ambitious, because we know how to do it. But if you want to do something that's a big surprise, that nobody knows how to do, that's going to be a real revelation, then the best thing to do is follow your gut about what's interesting. The thing to understand here, though, is that where that leads is not necessarily to any particular point. I think you can't say, here's this one thing, which is the thing I'm going to achieve in my life, and everything I do is going to be synchronised with that thing. If it's super ambitious, that's just out, on principle, because of the objective paradox. What you can do, though, is say: I will follow interesting stepping stones, which I don't know where they're going to go, and I'm going to follow them anyway, because I trust my instinct for the interesting. And then there's a good chance that you will encounter something really interesting in the end, but it won't be something that you could predict. So it's not this thing that you planned out that you're going to get to. 
It's also important to note, though, that this advice doesn't say you should be confident that you're guaranteed to get something special. There's no guarantee. Obviously, what we're talking about entails risk. The idea is that the opportunity for something really valuable to happen is created by following stepping stones that are interesting, but it's not guaranteed. And there's no way out of this trade-off: if you want to do something amazing, you have to take risks. And if you don't want to take risks, well, then you should do something that's not amazing, and that's okay, because you don't want to take risks. But you can't get out of it. There's no way to have zero risk and do something amazing. So this is an exploratory type of thing. The other part of it is that this word 'interesting' plays a really strong role here. The advice is: follow interesting stepping stones. One of the principles here is that the reason we were able to get to things like computers or flight, these amazing achievements, is that people did pursue stepping stones for orthogonal reasons. Those stepping stones were not pursued because of those final achievements. They were pursued because they were interesting in their own right. And so it's the instinct for the interesting in the here and now, which is different from asking where it's going to go. It's more like, where have we been, and how is this different in a unique and interesting way? That instinct is very important for exposing stepping stones that might be useful in the future, even though we don't know how they're going to be useful, just like the vacuum tube. And the last part of it is what you were asking about, within a company, or if you do want to cure cancer, what do we do? 
I think this really forces us to grapple with uncomfortable truths, because it does suggest that if you just set up a big organisation whose goal is to cure cancer, that's not a very principled thing to do. Of course we don't want to hear that, because we do try to basically invest in curing cancer. What we can do, which is the next best thing, to actually do something that's principled, is to explore interesting stepping stones in the adjacent spaces, but without the expectation that the cure for cancer is necessarily the thing we're moving towards. I think artificial intelligence is similar, in that it isn't harmful to explore around the space of alternative intelligence systems. But we might admit that we have no way of knowing that these are going to lead to artificial general intelligence. We don't even understand human intelligence. Yet exploring in this adjacent space is still likely to reveal some pretty interesting stepping stones, which can still lead to things that are valuable, though they may not be that one thing. It's similar in the world of cancer: we may not be able to plan how we're going to cure cancer, but we can basically expose a lot of stepping stones in the vicinity, and that could lead to other really valuable things. Or some day, long in the future, it might become clearer what a path to the cure might be. 

Alec: [00:16:12] It just makes me think that, you know, a lot of scientific research is very directed, and a lot of grants are very directed. You have to say, I'm doing this for this reason, not just, I'm working on this because it's interesting. Do these findings reveal maybe a challenge with the way that we give grants, and the way that charities allocate money to different research projects? 

Kenneth: [00:16:37] Absolutely. I think that the grant-making industry, if you want to call it that, or the institutions of grant making, are potentially some of the biggest beneficiaries of this news, if they would absorb it, because the way they think runs severely into the objective paradox. And this is ubiquitous across the world, and it's a serious problem for these kinds of institutions. You have to understand that in some industries objectives are less toxic, because those industries are oriented towards relatively modest aims, and so this isn't as much of a problem there. But in something like grant making, the whole thing is about blue sky innovation. The whole point is to discover things that we never could otherwise, things we don't know how we're going to do. So this is very, very relevant and salient. And I have a lot of experience with grant making, because I was in academia for about ten years as a professor, and in that time I of course had to request money and write grant proposals. What happens in that process is that you write a proposal, and, as is usually expected, they ask you to say what your objectives are. So there's problem number one: you're basically stating an objective in an industry where it doesn't make sense to have an objective, because of the objective paradox. What it forces you to do, if you really want to achieve your objective, if you want to be honest about it (and some people aren't; they don't really expect to achieve their so-called objective), is to propose something that's easy. And then we don't have ambition and real innovation happening. We have a status quo type of mediocrity. That's problem number one. 
And then problem number two is how it's judged, which is basically that there'll be a committee that tries to come to consensus over whether your objective is realistic and valuable. The problem with a committee is that it's a very objective-oriented social structure, especially when it's moving towards consensus, because there's this assumed objective notion of what's better and what's worse, as if there's one best idea. And when you move to consensus, you just suck out all the diversity. Think about it: if I have something that's really interesting, then I should be splitting the expert opinions, not gaining a consensus opinion. And consensus leads towards convergence. It means we're moving towards agreement. So if there are five people in a room, and each one of them believes a different thing is really interesting, and they have to move to consensus, we're not going to get any of those five things. We're going to get some washed-out happy medium that none of them find fascinating at all. And this is how we're deciding what should be funded. So we're getting wash-out effects across the board, losing the things that are interesting to anybody, because we have to come to consensus. The thing about stepping stones is that we want diversity. What gives power to a system in aggregate, an innovative system like, say, the scientific community, is that there are lots of stepping stones there, things you can use to get to the next stepping stone. If there were only one stepping stone in the world, there wouldn't be many places we could go, because you can only go from that one stepping stone. It's the fact that we have all these stepping stones, which you could call a repertoire or an archive, all the achievements of humankind over the aeons. 
Those are the stepping stones at our disposal. That's why we can get to so many places. But when you go through a filter of consensus, what you do is converge down and get rid of the diversity. And the same is true for you individually: the thing that could lead you on an adventure of serendipity is unique to you. It has to do with your background. A separate committee of people will not have the thing that leads to your serendipitous journey, and so they can't tell you what's going to be really important to you. So the whole thing is just totally unprincipled for the purposes it supposedly set out to achieve. And it should be, in my opinion, completely reworked from the ground up. Look, let's keep some funding for doing things conventionally. If you're scared to give up your objective security blanket, fine, let's have like 50%, I don't know, the exact percentage doesn't matter, and do those things the old-fashioned way. I'm telling you, it's not that great, but you'll get your mediocrity and it usually will work. But let's start allocating some resources, not everything, towards doing things in a more principled way. That's what I think we probably should do. 

Bryce: [00:21:16] So again, as investors, we spend a lot of time listening to management, assessing what they're saying, talking about their strategic objectives over the next three, five, ten years. You know, Zuckerberg talks about 30 years. If they're standing up there, though, and saying, I'm just going to follow my instinct and follow what's interesting, and as a company we're just going to uncover the next stepping stone, that can be a little off-putting for people who are looking to invest in what they're saying for the future. So based on your research, what would you like to be hearing from CEOs and company leaders that gives you the impression they're more likely to find that blue sky innovation than the business leader or entrepreneur next to them? 

Kenneth: [00:21:56] So, two things. The first thing is that there's a special case, which is important to identify, where it's okay for leadership to announce something that sounds really ambitious, that sounds like an objective. That special case is when it's actually only one stepping stone away, when it actually is possible, when we actually do know how to do it. The special case is special because those cases still require a lot of insight, because what they represent is the recognition that things have changed in the world fundamentally. In other words, sometimes something snaps into possibility because a new stepping stone has actually been achieved, maybe within the company, maybe somewhere else. Something novel has changed in the world. We've seen things like this. A recent example would be the image generation technology that's hot right now. People recognise there's a new stepping stone. We're not totally sure of all the things it's going to lead to; that's the whole point, that's why it's interesting. But we recognise it's going to be disruptive, and now someone can come out there and say things have changed, and realise what it actually does lead to. In other words, you don't need new technology; that is the new technology, but it enables something that we didn't realise was possible before. That, I think, is a great foundation for doing something innovative, maybe even a new venture, because you've realised before anybody else that something snapped into possibility. And it's not like there's no path: you know what the idea is, and you can actually do it. And you can convince an investor, for example, raise venture funding for it, saying, we could do this. That's principled, because it's only one stepping stone away. The things that I'm talking about are more than one stepping stone away. 
But those special cases are important, because I think they account for most of what we call visionaries: people who actually created something that seems incredible, and to whom we assign this mythological story of amazing visionaries who could see multiple stepping stones away and get there. I think the truth is, they didn't see multiple stepping stones away. No one can do that. No one is omniscient like that. What they did see is that something had just snapped into possibility. Think of something like Steve Jobs and the iPhone. The technology was all there, and you'd been exposed to it around Apple for years, you'd seen it. And so you just realised that you could build this incredible, magical thing, that right now it's actually possible. Obviously more goes into it than that, design and so forth, but that's the real thing that's going on there, as opposed to being in the Stone Age and somebody saying, let's make an iPhone. That would be very visionary, but impossible. So that's one version we need to be sensitive to, because it's actually something we should invest in, I think, and we should be aware of people who are like that and have that form of realistic vision. It's the snapping-into-focus type of situation. Now, the other side of it is this: what do we do about fostering innovation when there isn't a situation like that, but we do want blue sky? How do we approach that kind of thing? We need to be careful to disentangle the essential functions of any enterprise from this kind of blue sky exploration, and say, first of all, let's preserve essential functions. We don't want to overhaul everything into one giant blue sky treasure-hunting search. 
That would not actually work, because we still need a functioning organisation, and there may be things we need to keep doing to make money, depending on where we are. But then there are other organs of an organisation that should help you do this kind of blue sky exploration in a way that's actually principled. And it's important then to follow what is principled, which means not objective-paradox-style thinking, but that's the temptation. So you get things like in-house innovation labs, industrial research labs, which bill themselves as, this is our innovation centre, here's where the ideas are, we try things, we're open-minded, blah, blah, blah. But the truth is, you go to those places and somebody somewhere, probably talking to the head of the lab, is saying, oh, you know, how is this going to affect the bottom line? Write a report for us. We don't want to sound like we don't want to do research, we do, but just show us basically how all the stuff you're doing is going to fulfil our objectives. And suddenly it transforms back into an objective organisation again. The innovation centre itself is now an objective organisation, and you're back to mediocrity. Everybody's thinking, I've got to justify myself with an objective that's aligned with the company and where it's going, and so forth and so on. And finally, we should keep in mind that it's essential to survival, I think especially for larger organisations, to have some innovation organ like that. That's really important, because there's constantly this threat of disruption, especially from technology. So you do need to think about this. 
What we end up with is this kind of mediocrity, pursuing veneers of research that aren't really doing anything innovative. Sometimes there is real innovative research; I'm not going to be ridiculously pessimistic here. It does happen, but it usually happens despite the system, not because of it, because people find a way around it. I mean, professors do this too: they know how the grant system works, and it's utterly ridiculous, so they write proposals to fit the mould and then just do what they really want to do. They get the money, and it's an open secret. People step around the system and find ways, so innovation is still happening even in these orgs I'm criticising. But if you want to really go for it, to make it explicitly about what it's supposed to be about, then I think you've got to completely overhaul that idea. You need to free it from objectives. And what you're doing at that point is letting go of knowing how it affects the bottom line, or even whether it will. It may affect the bottom line, but in a completely different way from the way you think about your current strategy, maybe by going into a new business you're not even thinking about, which will be revealed through the circuitous path of the stepping stones that get unveiled. That might give some leaders cold feet: well, that's really random, we don't know what to expect, should we invest in something like that? But there are two things to remember here. One is that we're still exploring the vicinity of what that company finds interesting. It's not like you're going to hire a bunch of researchers and they'll all go start organic farms in some other state or something. That's not a problem. 
They're still going to be looking at real questions, so you're still going to get the type of stuff that's of interest to the company. Then you have to remember that what people find interesting is what guides us through non-objective search landscapes. So it's not like we're just doing random things. The people there are researchers, innovative people, making decisions about which stepping stones to look at based on experience. Generally that means hiring people with good experience, because they have good intuitions about what's interesting. So you don't need to be afraid: those people will find interesting things. That's what happens when you free up people who have a good sense of what's interesting. But the reason that doesn't happen is that nobody really trusts anybody to decide what's interesting. We want everything held to quantifiable, accountable assessments and metrics, and if we don't have that, we're freaked out and can't handle it. The problem is that interestingness is not amenable to metrics. You can't show me a graph of how interesting something is and whether it's going up, because it's not an objective thing, it's a subjective thing. But that doesn't mean it's random or unprincipled. It just means it's too complex to quantify, because basically everything you've ever experienced in your entire life comes to bear on what you find interesting. And we have to respect that, because we've invested decades of education in you to get to that point, and presumably not just so you can look at a single metric and decide if it's going up; that's something somebody could do in first grade. The thing that makes you special by the time you're a mature professional is those 30 years of experience developing an instinct for what's interesting and what's not. 
And we can even talk about it; you can tell me why. It's not like it's all just private intuition that you can't explain and I can't listen to, so we all have to act ignorant. Let's talk about it for several hours. You can explain: this is really interesting for all these reasons. But forget about telling me where it's going, because we don't know. 

Alec: [00:30:42] You mentioned Steve Jobs; that was one stepping stone away. The other one that comes to mind is Elon Musk with electric cars and rockets. We say he's really visionary, but it was one step away; the technology was there. Bryce mentioned Mark Zuckerberg earlier, and it got me thinking: do you have thoughts on the metaverse? Is the metaverse one step away or not? What's your view on all of that? 

Kenneth: [00:31:06] Yeah, these are great examples, because I think they show how this kind of objective-paradox lens can be applied to analysing these questions. Is the metaverse the first type or the second type of situation? Is it one stepping stone away, or is it a very ambitious, multi-stepping-stone objective, which is not a smart thing to be pursuing? Of course, no one can know for sure. I'm assuming Mark Zuckerberg, if he listened to my argument and agreed with it, would say it's one stepping stone away, because he has no other choice; otherwise he'd basically be saying, this is stupid. What is helpful is that we can now think about it that way. So if I try to think about it that way, the question is: are the technologies that are the necessary stepping stones to ubiquitous adoption of a metaverse really right here, right now? I'm not an expert in VR, but my instinct is that they're not here. What I think would make it ubiquitous is if the VR was so convincing that it's basically indistinguishable from real life. Then, yeah, I could see mass adoption, no question. But the technology looks to me like it's not even close. I can't even imagine what that looks like; I don't even know if it's a headset or what, but it's not even close. And so I think it's multiple stepping stones away, and I think Mark Zuckerberg has hitched his ship to something that is extremely risky because of that. Still, I respect it at some level, because I think it is respectable to take risks. Often people don't: someone says, this is interesting, so we should do it, and then the boss says, no, I don't know how that's going to work out, and so they do nothing. 
And I think it's kind of cool when somebody actually says, let's just do it and try it. But the motivation to me does seem unprincipled, because he has an objective; he's not just saying, these things are interesting, let's pursue them. I think we could look at it a different way, which is that it might be interesting to explore all this stuff, but it won't get to where he thinks it's going to get. We're not going to get a ubiquitous metaverse; forget it, it's not coming in the next ten years. It may come in the next hundred years, but just not anytime soon. Even though we won't get there, we may get somewhere else, because it's such an expansive thing, there are so many stepping stones being unveiled, and we're investing so much money that some of the things along the road are going to be valuable. So that's still possible; it's not an all-or-nothing situation. But on the face of it, I'd say it's not a really great objective, because it seems to be too far away. We see examples of this all the time. It's all about timing, because you've got to know whether it's one stepping stone away before doing something like this. Self-driving cars are another one. Go back to 2017 or so: everybody was saying self-driving cars were right around the corner, including Musk, although Musk has been right about plenty of things, so this is not a personal attack on him in general. But on that one, it just wasn't right around the corner. The technology wasn't one stepping stone away; we were overestimating what was there. And here we are five years later, and we still don't know when this is going to happen. 
And artificial general intelligence, the kind of holy grail of AI, is also like this. There are entire organisations and companies focussed on it, and because the path is full of interesting stepping stones, much value will be created. But whether they will achieve that objective is a different kind of question. The jury is completely out on that; there's no way of being confident about it. 

Alec: [00:34:45] Well, Kenneth, we want to turn to AI, but before then we'll just take a quick break to hear from our sponsors. So, Kenneth, before the break we were talking about the objective paradox, and we really want to turn to AI now, because you've been in AI research at universities and in organisations for a number of years. When we were preparing for this interview and thinking about the objective paradox in your book, it made us think about a lot of the research out there, and a lot of the funding that goes into research; it often feels like there is a clear objective. And before the break you mentioned the objective of getting to artificial general intelligence. How do you think your findings on the objective paradox translate to a lot of the research happening at the moment? 

Kenneth: [00:35:35] Yeah. It means we should be circumspect in our assessment of where we are relative to these kinds of holy-grail expectations. That's not necessarily a critique of the research, or of the extreme amount of value being uncovered by it; all of the leading organisations have a lot of runway left to uncover all kinds of amazing stuff. But the question is: are we going to get to AGI? We just don't know what the stepping stones to AGI are. Now, there are some people who think we're one stepping stone away, and if you buy that, then it follows that you should invest to get there. Most people think we don't know all the stepping stones, and I would agree: there are probably a few major ideas still needed before we get to human-level stuff. So it's totally speculative. When somebody says it's right around the corner, I mean, no one has a clue. The unfolding of history is very unpredictable. It could happen within the next 20 years, because some major breakthrough happens that we can't foresee, some Einstein-level idea. But that's not the kind of thing you can facilitate or predict. You can't say, this is when it's going to happen if you put this much investment in, and we don't know how many Einsteins we need; it may be several in a row. So I look at it as something where I basically don't know, which is a position you don't hear that often from experts. Usually they have a strong opinion: this isn't going to work, or this is going to work. I'm like, okay, I don't know. This is an objective-paradox situation. We don't know what the stepping stones are, and all I can say is that the adjacent areas are interesting, but whether we get to the holy grail, we have no idea right now. 

Bryce: [00:37:33] So, Kenneth, we're seeing some really exciting recent developments in AI come through, including GPT-3, a sort of natural-language-processing model, for those listening along. What are you most excited about at the moment? 

Kenneth: [00:37:49] What's really promising is this idea of amplification of human creativity. With current technologies, these systems are not like people. I don't think of them as slightly stupid people or slightly younger people or something like that; they're just not like people. There isn't a clear analogy for them in humanity. They're something fairly alien, which does do interesting things, but which has these holes that we still don't fully understand, conceptual holes that are hard to articulate, because the words for them don't really exist; in our normal experience of life we don't interact with a being who has holes like these. They're not fully understood yet, but they seem to relate to heavy analytic, logical deduction, where things just break down; these systems are unreliable in ways that can be unpredictable, and that's concerning currently. The thing is, though, they have enough of the veneer of human intelligence that they can reveal things to us that are clearly revelatory, like the art stuff, and the things of value being generated in language generation. I wouldn't want to leave something like that alone to do its own thing; it's in concert with humans that interesting things can come from it. It's an ideation machine, if used carefully and right. And this, I think, is a really virtuous use of AI: in the process of human ideation. Because if you think about it, there's a depressing version of AI where it takes over for us, and then we're basically toys or pets or something that get taken care of. We don't really have any reason to be here anymore; we have no point. 
Because anything we create sucks compared to what it would create, so no one's going to pay attention to anything a human creates anyway. It's kind of depressing; there's basically no point. And I don't think we as human beings thrive only on consumption, just consuming media all our lives and never outputting anything. Output is really important to wellbeing, I think, and production. So in some ways the idea that AI becomes a companion, where the first-class part of it is us, is to me more virtuous, because it preserves what human nature needs, which is to be producing things, while facilitating and amplifying that ability. And we have this opportunity; it's clearly starting to happen that people can participate in areas where they couldn't before, like art or music, because of having these facilitators, these AIs that help them express themselves in ways that would previously have required a lifetime of professional experience. And that does seem exciting. To get even more specific: we've seen a lot with art, but music is just ripe for it, and it's going to be really interesting. If, for example, you, not a professional musician, could just sit in your bathroom and sing into your phone, and a few minutes later you've got a fully, professionally produced Top 40 rock song, that would be absolutely disruptive and insane. And I think it's within the bounds of the kind of stuff we're seeing that things like that are possible. So there's going to be some really interesting stuff on the horizon. That's the kind of stuff that makes you go, wow. 

Alec: [00:41:31] Yeah, wow. You think there's a lot of music on Spotify now? Just wait until AI music comes. Well, Kenneth, we have almost run out of time, so we want to, first of all, say a massive thank you for joining us today. One final question, another thing that Bryce and I were thinking about as we prepared for this interview: it feels like with AI, maybe for the first time in our lifetimes, maybe in human history, there's never been a more divergent set of possible outcomes. On one hand, it could revolutionise every industry and benefit humanity and society in numerous ways. On the other hand, it's perhaps the biggest risk that humanity has faced, and it could be existential. It doesn't feel like any other technology has had such a divergent set of possible outcomes. And then you overlay the objective paradox, where you can't set clear objectives in innovation, and development often brings unintended consequences. How do we properly manage the development of AI, and the risk of AI, when we can't set clear objectives and it's often the unintended consequences that matter? 

Kenneth: [00:42:41] Obviously the answer to that question is extremely complex, so I don't have a full answer. But I do think these insights about non-objective search and the objective paradox, about why greatness cannot be planned, actually bear on that question, because it relates to open-ended systems, which is actually the area I come from within AI: systems that continue to create without bound, where you don't necessarily know where they're going. Civilisation is an example of that; natural evolution is another. So what I think we need to do is look to other open-ended systems to understand what we're getting into here, because what we're not appreciating is that what we're creating is an extremely rare kind of phenomenon in the universe. It doesn't happen very often that there's an actual open-ended complexity explosion. One is natural evolution, going from single-celled organisms to everything alive on Earth today. The other is civilisation, which is pretty much all of the inventions of all of history, and that includes not just technologies but artistic inventions, social inventions, democracy, things like that. All of that is the process of civilisation; that's an open-ended system. And these open-ended systems are extremely rare and unfathomably powerful. The thing that created all of living nature is literally biblical in terms of what it did, and civilisation likewise: everything you look at out the window or inside your room, civilisation is there. These are not normal things to be created. It's not like a new kind of oven or something; it's actually a process that's being created. And so to understand processes and the risks they entail, we have to look at examples of other such processes. 
And civilisation is a great example, because it's very similar to what we're creating. In some ways we're trying to recreate civilisation, because if you create AI, it's going to immediately start interacting with society. It might create its own society of other AIs, who knows? But basically, a social process has been created. It's not just a brain in a box; it's a process that's been triggered, actually the continuation of an already ongoing process, which is civilisation, just amplified now and sped up. And we understand some things about these kinds of systems, because you could ask the same thing about civilisation or about society: well, how are we going to control it? We don't know what it's going to do. Of course this is a huge problem; people are unpredictable and dangerous as all hell, so we have a lot to be afraid of when it comes to society. But we've actually created systems, basically governments and institutions, that try to channel all of that energy in a positive way on balance. That machinery already grapples with a lot of these really difficult questions, like, what are human values? People get tied up in knots asking how we're going to impart human values onto AI when we don't even agree on what human values are; there's real disagreement about everything. Well, we've been grappling with that for thousands of years. It's an imperfect process, and maybe we can get better at it, but there is a way to try to converge to something where there's a degree of consensus that the outcome is somewhat reasonable, and we're going to have to deal with AI in that way. You're not going to simply impose what's right and what's wrong; it's naive to think that's going to work here. 
We're going to have to grapple with an open-ended system through institutions and controls that create incentives very carefully. What that will ultimately do is create a situation where people will not allow machines to do some things, because the risk to them will be too high given how the incentive system is set up, so people won't want to do those things. You have to take responsibility when you commit a crime, or when you do something really dangerous; you may not mean to hurt anybody, but if you do hurt people, it's your fault and you're going to be held responsible. It has to stay that way even when the thinking involved is alien. So we have to figure out how to arrange things so the incentive system works on a case-by-case basis, with people weighing things on balance and institutions bending around all of the complexity of what's going on. 

Bryce: [00:47:02] Well, Kenneth, you have left us with a lot to think about, that is for sure. We have thoroughly enjoyed our discussion today; we've covered a lot of ground, especially as Ren and I try to grow our own business and think about objectives and blue sky. 

Alec: [00:47:17] No more goals. 

Bryce: [00:47:18] Blue sky thinking we will definitely have. Yeah, as I said, it's left a lot to ponder. So for the Equity Mates community, Kenneth's book is Why Greatness Cannot Be Planned: The Myth of the Objective, if you'd like to go and find it and read about all this in more detail. But Kenneth, it's been an absolute pleasure. Thank you for sharing your time with us today. 

Kenneth: [00:47:38] Likewise. I really enjoyed being here. Thanks for having me. 

Bryce: [00:47:40] Thanks, Kenneth. 

