Aemula is a new kind of media platform that’s trying to tackle a big problem: the fact that the structure of our news media leads to various outcomes that amplify toxic polarization. (Sign up for free at aemula.com.)
Instead of the usual “engagement = more exposure” logic, Aemula flips the incentives. You read an article, then you tap a simple Support or Disagree button — and those signals build a living map of Aemula’s community: a 3D social network graph showing how readers, writers, and articles relate without slapping on ill-defined partisan labels like left and right – labels that often unintentionally amplify us-vs-them, team-based thinking.
Aemula creator Don Templeman and I get into:
- Why left/right-type labels can be a misleading way to understand beliefs or categorize content
- How Aemula uses social network analysis to map out relationships and ideological groupings in an objective, data-driven way
- How Aemula’s social network can help define a sort of ideological center, and how promoting content from the widely supported regions of the network can help reduce polarization
- How the blockchain aspect of Aemula makes it self-governing and therefore infinitely scalable
- How Aemula’s approach could matter even more in an AI world, where chatbots and LLMs need better sources than “Reddit + Wikipedia”
If you’ve ever felt like the incentives of the media ecosystem seem destined to drive us further apart — I think you’ll appreciate learning about Don’s paradigm-shifting approach to the news.
Episode links:
- YouTube (includes video, recommended)
- Apple Podcasts
- Spotify
TRANSCRIPT
Don Templeman: “So we’ve created this explore page, which shows this perspective map, which is essentially showing how you relate to all the other users on the platform, articles, and authors that wrote them. So as you’ll see here, I’m this blue, white, blue dot and our authors on the platform are these orange dots and the dark gray dots are articles. And you’ll see other like white dots that represent other users. But this is a 3D graph that shows where you fall relative to these other authors and users in the space. And the reason we do this in a 3D map is we don’t want to try to collapse everything into just a left versus right thinking, because we are trying to reverse polarization. And what we find is you can find points of consensus between users that you may not typically agree with on everything, but you may share some close views on some perspectives. And we can use that to map out and find communities of ideologies and determine what are the best articles to recommend people to open new lines of communication between these different communities on the platform. So this is really a visual representation of how we go out and find articles that we think you’ll like.”
Zach Elwood: That was Don Templeman explaining some basics of content recommendation for his new blockchain-based journalism platform Aemula.com; that’s AEMULA.com. You can sign up for free for Aemula, and I recommend that you do, and hope that you do.
Aemula is a new kind of media platform that’s trying to tackle a big problem: the fact that the structure of our news media leads to various outcomes that amplify toxic polarization.
Instead of the usual “engagement = more exposure” logic, Aemula flips the incentives. You read an article, then you tap a simple Support or Disagree button — and those signals build a living map of Aemula’s community: a 3D social network graph showing how readers, writers, and articles relate without slapping on ill-defined partisan labels like left and right – labels that often unintentionally amplify us-vs-them, team-based thinking.
Don and I get into:
- Why left/right-type labels can be a misleading way to understand beliefs or categorize content
- How Aemula uses social network analysis to map out relationships and ideological groupings in an objective, data-driven way
- How Aemula’s social network can help define a sort of ideological center, and how promoting content from the widely supported regions of the network can help reduce polarization
- How the blockchain aspect of Aemula makes it self-governing and therefore infinitely scalable
- How Aemula’s approach could matter even more in an AI world, where chatbots and LLMs need better sources than “Reddit + Wikipedia”
If you’ve ever felt like the incentives of the media ecosystem seem destined to drive us further apart — I think you’ll appreciate learning about Don’s paradigm-shifting approach to the news.
A quick note: if you’re listening to this and not watching it, this episode might be rather weak, as it’s a visually focused episode. If this topic interests you, I recommend watching it on YouTube: my channel is at youtube.com/peoplewhoreadpeoplepodcast.
I myself have been working on reducing toxic political polarization for more than five years. I’m the author of two books on polarization, which you can learn about at www.american-anger.com. I’m quite skeptical about our ability to reduce toxic polarization, as I see it as the result of so many nested and self-reinforcing cycles of contempt and anger. There are only a few ideas I’ve seen that have excited me and made me think: here’s something that is capable of shifting things in a big way; of changing the underlying social incentives in ways that reduce us-vs-them contempt and anger instead of amplifying it. And there are also few paradigm-shifting ideas I’ve seen that have the potential to actually be used by a lot of people and scale up and create big changes; some ideas seem good but require top-down enforcement to be implemented, whereas Don’s project is user-focused; a private market product that gives people what they want while also incentivizing better behaviors.
I think Don Templeman’s Aemula project is a great idea. I think it’s revolutionary, and paradigm-shifting, and I think Don is a very smart person. I hope he succeeds in getting lots of funding to build out Aemula. This is why I personally hope you will take a look at Aemula and sign up for it. It’s just possible it might be the future of how news and journalism is done. You’ll maybe look back one day and think, it was cool to be in on the ground floor when this thing first got rolling.
If anything I’ve said has intrigued you a bit, and piqued your curiosity, I hope you watch this episode of Don explaining how Aemula works.
And speaking of media companies having incentives to promote fringe, extreme, and polarizing content: the last episode of my podcast was an examination of the paranoid and insane content that Instagram has been promoting to me and others. If you’re curious about that, it was an episode I uploaded only to YouTube due to it being so visual. You can find it on my YouTube channel at youtube.com/peoplewhoreadpeoplepodcast.
Okay, here’s the talk with Don Templeman, founder of Aemula.com.
00:00:03 – Zach Elwood: Hey, Don, thanks for joining me again.
00:00:04 – Don Templeman: Zach, yeah, thanks for having me back on.
00:00:07 – Zach Elwood: I appreciate it. Yeah, I’m really interested in the work you do. So I thought maybe we could start with you walking through the Aemula login process and what you see there and then talking about the social network analysis and graph kind of stuff.
00:00:26 – Don Templeman: Yeah, happy to give an overview. I just requested to share screen.
00:00:31 – Zach Elwood: Okay, you should be able to do it now.
00:00:32 – Don Templeman: Perfect. I’ll pull it up and just start from the beginning. If people go to aemula.com, they hit our landing page, and you can click Start Reading to sign in. If you don’t have an account, you just type in your email and we’ll create an account for you. I know a lot of people don’t like entering emails when they’re creating new accounts, but we actually don’t store emails on our end; it just creates a hash. We won’t send you marketing materials or anything. It just takes a few seconds to set up. And what you’re met with here is a front page that’s curated just for you. Obviously, if it’s your first time on the platform, we’ll show you some high-quality articles for you to get started. But importantly, the core of what we’re doing is trying to support independent journalism. So all of the articles you see are published independently by the writers. They’re owned by the writers. They’re stored and served on a peer-to-peer network, so nothing is coming from our servers. And they’re recommended to you through an open-source, community-governed algorithm, because we’re trying to remain as neutral as possible as a platform, just to give writers the tools to publish and report, and readers the ability to have one subscription to access all the information on the platform. And the basic functionality is: you go in, you read articles, and at the bottom of each article there’s just this little Support or Disagree button. And after you read, you can determine if you want to support the author or disagree with them. And what we can do is link that to create a connection between you and that article in our system. And so as you begin to read and interact with articles, we can understand roughly what your point of view is, whom you typically agree with, and we can start to make recommendations that are close to your beliefs while still promoting articles that are more moderate or more widely supported by diverse user sets,
which is how we determine quality. And this front page is meant to just be a quick, simple way for you to get in and read some articles that we think you’ll like. But we want to give users more control to freely explore the articles on the platform and freely discover new writers and new perspectives. So we’ve created this Explore page, which shows this perspective map, which is essentially showing how you relate to all the other users on the platform, the articles, and the authors that wrote them. So as you’ll see here, I’m this blue-white-blue dot, our authors on the platform are these orange dots, and the dark gray dots are articles. And you’ll see other white dots that represent other users. But this is a 3D graph that shows where you fall relative to these other authors and users in the space. And the reason we do this in a 3D map is we don’t want to try to collapse everything into just left-versus-right thinking, because we are trying to reverse polarization. And what we find is you can find points of consensus between users that you may not typically agree with on everything, but you may share some close views on some perspectives. And we can use that to map out and find communities of ideologies and determine what are the best articles to recommend people, to open new lines of communication between these different communities on the platform. So this is really a visual representation of how we go out and find articles that we think you’ll like. But rather than relying on our ranking, you can go in and find articles on here, and you can say, I think this is you, yeah, Zach Elwood, I can find one of your articles, click on it, and read it directly from there. So you’re not having to rely purely on recommendations that come from your front page, if that all makes sense.
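The Support/Disagree mechanism Don describes can be pictured as a signed bipartite graph between readers and articles, where shared reactions imply similar perspectives. Here is a minimal illustrative sketch of that idea (made-up names and a crude similarity score; this is not Aemula’s actual code):

```python
# Toy signed bipartite graph: readers connect to articles via
# Support (+1) or Disagree (-1) signals, as in Don's description.
from collections import defaultdict

class SupportGraph:
    def __init__(self):
        # edges[reader] maps article -> +1 (support) or -1 (disagree)
        self.edges = defaultdict(dict)

    def react(self, reader, article, support):
        self.edges[reader][article] = 1 if support else -1

    def affinity(self, a, b):
        """Crude similarity: agreements minus disagreements on shared articles."""
        shared = self.edges[a].keys() & self.edges[b].keys()
        if not shared:
            return 0.0
        score = sum(1 if self.edges[a][x] == self.edges[b][x] else -1
                    for x in shared)
        return score / len(shared)

g = SupportGraph()
g.react("alice", "article-1", True)
g.react("bob", "article-1", True)
g.react("bob", "article-2", False)
g.react("carol", "article-1", False)
print(g.affinity("alice", "bob"))    # 1.0 -- agree on all shared articles
print(g.affinity("alice", "carol"))  # -1.0 -- disagree on shared articles
```

Scores like these are one simple way a platform could position readers near people they tend to agree with, without ever labeling anyone left or right.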
00:04:14 – Zach Elwood: Yeah. And it’s pretty early, obviously. Aemula’s just started, so there’s not very many things on there. But as it grows, I think you were saying, you would expect to see some mapping reflecting the polarization in society, where, assuming you’ve got a representative sample of the American population, you’d probably eventually see a grouping into these two clusters, because of the related stances on issues that people on both quote-unquote sides have. But it’s too early to see that because it’s just starting out, right? Absolutely, yeah.
00:05:05 – Don Templeman
We have a few hundred users, a few hundred articles, and just over a dozen publications on the platform. Within the Explore page, I will note that this is just my local community. So these are people that are close to me. Obviously, I interact with the platform a lot, so this does represent a large portion of it. But we also have this separate perspective map, which shows you all of Aemula, so you can see roughly where you fall relative to everyone. And the reason we do that is we don’t want someone to go and just find opposing points of view and try to disagree with them to demote them in the process. We want people to interact and explore their immediate communities and beliefs, and articles that we think they’re likely to support, likely to agree with, and not just go out and try to find competing points of view from the get-go. But we do want to show everyone roughly where they fall. And going back to left-versus-right thinking: obviously as we start to grow the platform and it is more representative of everyone’s ideologies, we would expect to see some filtering into left-versus-right clusters. But we want to avoid having to label things as left or right, or keep it that simplistic. So this is why it’s in this three-dimensional space. And we don’t actually know what on here is left or right, because we as a platform want to remain verifiably neutral. And one way to do that is to say, we actually don’t know what the underlying content of these articles is. We don’t know the ideologies of the underlying users on the platform. All we know is their public address, their account number, and we know the address of the article, and we can map it out just based on everyone’s relationships and how they’ve interacted on the platform, so that no one can point to us and say, oh, you’re pushing a specific narrative, you’re platforming specific writers.
We can say we actually don’t have any insight into that, and it’s all just generated based on how the community’s operating and interacting.
00:07:01 – Zach Elwood
Right. You’re using transparent algorithms that are value-free in how they handle content; just using a transparent, consistent algorithm is the goal. And you and I talked about this on the last episode, and I’ve had episodes about the illusions of the left-right spectrum: there’s a lot of critique that the left-right spectrum is an illusion, and also a conflict-amplifying illusion, because the embedded nature of talking about our political divides as a left-right spectrum can itself be very false and also just get people thinking in these left-right terms. That can help explain why there’s this filtering toward everything being part of this monolithic left-right divide. But that’s the great thing about what you’re doing, and social network analysis in general: it’s value-free and label-free, right? You’re not getting into trying to determine what makes something quote left or right, or all these kinds of labels. Yeah, exactly.
00:08:19 – Don Templeman
So that’s it: it’s overly simplistic. You start to over-categorize things into left versus right, and then people get into the thinking of, oh, I identify on the right, or I identify on the left. If you show me something that’s labeled as being from the opposing point of view, I’ll automatically discredit it, just because I know it comes from the other team and I don’t want to support anything they’re doing. But when you actually start to look at underlying beliefs, you’ll find that there’s a lot more nuance there, and a lot more complexity. Some people may disagree on a wide array of things but have a lot of agreement on a certain topic. And so what we want to do is make it as easy as possible for those people to start communicating, open up new lines of dialogue, to be able to understand some of those other perspectives that they may disagree on, just so they can start to have the conversation, start to see information from other people. Because with the way we currently discover information, with traditional publications or on social media, you’re normally only finding stuff that’s within your immediate cluster, stuff that the publication thinks you’ll like, or stuff that the algorithms on social media think you’ll like. And so you just get further and further reinforced into your current beliefs. And we want to do the opposite. We want to reverse those forces by mapping everyone out in one holistic space and saying, you can start to discover these new perspectives that are around you in these different communities, just so you can start to get a better sense of the world around you.
00:09:46 – Zach Elwood
Yeah. For people that are curious about the idea that the left-right spectrum is an illusion, I’d say check out the book The Myth of Left and Right, and check out maybe an episode I did where I interviewed the co-author of that book. Yeah. So maybe you could talk a little bit about: have you seen interesting patterns in the clustering so far on Aemula? Do you have any interesting observations about the behavior you’ve seen?
00:10:15 – Don Templeman
A lot of our early writers are people like yourself that understand what our mission is, and they’re writing from an inherently centrist, depolarizing perspective. So even though we do have some clusters of information here, it is still so early that all of these people are roughly in what we would call the center. But we have seen interesting interactions on the platform, just with the recent $5,000 essay contest that we ran. As users were coming on and trying to support the writers that were sharing those essays on other platforms, they were coming in as new users and starting to interact. So you can see, like with your article here, a lot of support there, and some support where it’s a few readers largely supporting one writer’s piece of work. So we started to see some behavior like that, which we would expect to see more of at scale. But for the time being, I would say it’s too early to start doing some of the more interesting things that we can do with this type of structure and these types of algorithms, because you really do need a lot more data to start more accurately reflecting the ideologies of the population.
00:11:30 – Zach Elwood
Can you talk a little bit about the concept of, like, the gravity, or how more-connected ideas cluster more toward the middle of the graph, that kind of thing?
00:11:43 – Don Templeman
Yeah. So what we want to support, in our way of reversing polarization, is: if you map everyone out, you can see that there are some clusters here that are more on the fringes, and a lot of users here in the middle that are interacting with a wide diversity of authors. So we want to promote authors that can write an article that gets diverse support from multiple different ideological communities in our graph, because that indicates to us that they’re making strong arguments, they’re presenting factual information, and people are willing to interact and engage with their content. Whereas someone who may be more on the fringes might be getting a lot of traction from some small group of people. If we were just using pure engagement metrics, we would say, oh, they’re getting a lot of reads, they’re getting a lot of eyes; if we were trying to sell advertisements, that’s a very valuable person, so we would try to promote them more. But that’s typically what happens when you look at traditional social media, with people sharing inflammatory content from a more radical ideology. And that’s the opposite of what we want to support. So by mapping everything out in this 3D space, we can start to say: this is the center of our community; this is where we want to start to draw more eyes and more attention. So for people on the fringes, when they join the platform, we have information that is relevant to their beliefs, they’re willing to engage with it, and they can start reading and interacting on the platform. And then over time, we can slowly show them articles that are closer and closer to that center. And the way that we’re able to determine what the center is: every time there is a connection made between a reader and an article, it creates these little edges. You can see, if I zoom in, all these little connections. And essentially what we do is give those a gravity about them.
So if I’m supporting a lot of your articles and making a lot more of those connections, we’ll grow closer and closer together in this 3D space. And we can use that gravity to determine who’s getting a lot of connections from a lot of different perspectives, because that’ll pull them in closer to the center. And if someone isn’t getting a lot of diverse attention, they’ll be drifting off further on the side. So it’s a more natural way of using that gravity force to see who is being pulled into the center, who’s more on the fringe, how we can start to promote articles to people on the fringes to pull them in, and which articles would be the most impactful, that that person would likely agree with and actually be able to interact with, but would also move them closer to the center. And that’s really the underlying basis of how we drive our recommendation algorithms.
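The gravity idea Don describes can be sketched as a toy force simulation: each edge pulls connected nodes together, so an author supported by diverse reader clusters settles near the center of the map, while an author supported by only one cluster drifts to that cluster. This is an illustrative sketch with made-up data, not Aemula’s actual algorithm:

```python
# Toy "gravity" simulation: edges pull an author toward the
# centroid of their supporters, step by step.
import numpy as np

rng = np.random.default_rng(0)
# Two reader clusters at opposite ends of a 3D space.
cluster_a = rng.normal(loc=(-10, 0, 0), scale=0.5, size=(20, 3))
cluster_b = rng.normal(loc=(10, 0, 0), scale=0.5, size=(20, 3))
readers = np.vstack([cluster_a, cluster_b])

# One author supported only by cluster A; one supported by both clusters.
fringe_supporters = list(range(0, 20))
bridge_supporters = list(range(0, 40))

def settle(position, supporter_ids, steps=50, pull=0.2):
    """Each step, edges pull the author toward their supporters' centroid."""
    pos = np.array(position, dtype=float)
    for _ in range(steps):
        centroid = readers[supporter_ids].mean(axis=0)
        pos += pull * (centroid - pos)
    return pos

fringe_pos = settle((0, 5, 0), fringe_supporters)
bridge_pos = settle((0, 5, 0), bridge_supporters)

center = readers.mean(axis=0)
print(np.linalg.norm(fringe_pos - center))  # large: stays out with its cluster
print(np.linalg.norm(bridge_pos - center))  # near zero: pulled into the center
```

The diversely supported author ends up near the overall center without anyone ever labeling the clusters, which is the point Don makes about the layout staying verifiably neutral.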
00:14:24 – Zach Elwood
Yeah. And you briefly mentioned this, but this is basically the opposite of how a lot of content recommendation algorithms work on social media, where, for example, people might have heard about these cases on YouTube where you express interest in one thing and it gets you down a rabbit hole of more extreme and conspiracy-minded content, because that is a valid way to get people more engaged, especially by ramping up the emotionality of it too. But what you’re trying to do is give people what they want, but also move them a little bit in the opposite direction of going down some really fringe rabbit hole. Right.
00:15:14 – Don Templeman
Exactly.
00:15:16 – Zach Elwood
And it is not just discovery.
00:15:18 – Don Templeman
We’re also trying to change the incentives of how the content is actually produced because when you’re on a platform like YouTube or on X or really any traditional social media, you’re trying to optimize for the incentives that are at play in those ecosystems. So if what is being rewarded by the algorithm is some clickbait thumbnail headline that gets a lot of inflammatory people arguing in the comments, you’ll start to create content that aligns with that. It’s audience capture.
00:15:47 – Zach Elwood
You’re trying to… Yeah, it’s a self-reinforcing cycle. Exactly.
00:15:51 – Don Templeman
And you can’t blame the creators in that context because they’re trying to maximize their earnings on the platform. They’re trying to maximize their views. They’re trying to spread their message as wide as possible. So if that’s what the platforms are incentivizing, that’s where you would expect the content to… That’s where the system leads.
00:16:09 – Zach Elwood
The system naturally leads that way. Yeah.
00:16:12 – Don Templeman
Exactly. And that’s the deterioration of content quality that you see on a lot of platforms, where they’ll start out and people will speak so highly of them: oh, look at how this platform is creating all of this new content that you can’t find on other platforms; writers are able to freely express themselves. But then as they grow and those incentives become more prevalent, you start to see deterioration, collapsing back toward that mindset of, I’m just trying to gamify the algorithm to maximize my exposure. And that’s what we’re seeing now with Substack, where Substack started as this cultural engine of change, inviting a lot of independent writers who were now free to write and own their own perspectives. And there was a lot of great content on the platform. But as they’ve grown and they’ve implemented traditional social-media-style algorithms, with their Notes feature and releasing the Substack app, a lot of these writers are now trying to play the Substack game of how do I get the most subscribers. And that’s leading toward more clickbait-style headlines, people writing about very similar topics that are being promoted well in the algorithm. And if you look at people talking about Substack and their opinions on it, some people are starting to leave because they’re seeing that occur on the platform. But really that’ll occur on any platform unless you change the underlying incentive structure. Whereas since we are promoting content that’s high quality, getting diverse acceptance from across our user base, we’ll start to incentivize writers who, if they’re trying to gamify our algorithm, will start writing higher-quality content that is more widely appealing to more people. And that’s what gets promoted; that’s what gets more monetization on the platform. So we’ll be able to reverse that trend, where if you start to try to gamify our algorithm, it actually increases the quality of content over time.
00:17:58 – Zach Elwood
Yeah. I mean, you’re talking about shifting a paradigm that’s kind of unquestioned and dominant. Basically nobody else is really questioning the basic paradigm, and you’re trying to shift the whole underlying paradigm of incentives. Yeah. And make it infinitely scalable at the same time. Yeah.
00:18:19 – Don Templeman
Yeah. And I would say a lot of people realize that this is happening. A lot of people feel the content quality degradation. A lot of people realize that when the incentives are forcing clickbait-style, short-form content, that’s how content is going to go. But when you remove those incentives, or change them like we’re trying to do, for a lot of the creators, this is what they would prefer to be doing, if given the freedom to actually be able to create that style of content. So it’s more just a factor of: if you start to switch those incentives, you can start to let people write more freely, share what they’re actually wanting to create, and over time that’ll be able to increase content quality.
00:19:03 – Zach Elwood
Do you want to share anything else about the visuals of the graph? I was going to ask you some kind of like broader questions about social networks, but I don’t know if you wanted to mention anything else you want to highlight there.
00:19:15 – Don Templeman
No, I think if people are listening and want to check out the platform, you can go in and start to interact with the algorithms and play around with it yourself. And I’m always happy to hear feedback from people as they start to interact with it.
00:19:28 – Zach Elwood
No, it is really cool to play around with. And I’ll enjoy seeing it grow over time and seeing what patterns develop. I think that’s one of the interesting things about the social network graphing: seeing the patterns, and how those map onto societal patterns, and how that’ll grow. Yeah. So maybe you could talk a little bit about how that social network graph is an implementation that many social network platforms use, and where that idea comes from. Obviously, you didn’t create that; you’re using and harnessing the idea. Maybe you could talk a little bit about the social network analysis idea in general.
00:20:12 – Don Templeman
Yeah, I think it goes back to really the start of the internet and the start of the web in general. I believe it was Tim Berners-Lee, when creating the World Wide Web, talking about the semantic web and how you could have context from how websites and servers are all interconnected. And that idea was really built on through networking in the early stages of the internet, and I think popularized by early social media platforms like Facebook as they started to grow. But it really is just an intuitive way to think about relationships of people and content online. So it is just saying: if I post something on Facebook and you like it, then there’s a connection that we have made, where you liked the post that I’ve made. And it’s just a very intuitive way to start thinking about networks. But the other reason they’re used so prevalently, especially on the internet, is because they create these social graphs, which are a whole field of mathematics, with graph theory and information theory. So it makes them easy to study. And what it allows you to do is start to gain insights on user behavior and how information is flowing through networks, purely from interactions on the platform, which is really why we’re starting to leverage and use it. But it is used widely across a whole array of different use cases. So obviously in social networks, we’re using it as a recommendation algorithm; Netflix bases their recommendation algorithm on a social graph like this, just interactions of content you’re consuming and what they think you’d like. But it also can go into wider fields: fraud detection, where banks use similar technology to detect fraud; and epidemiology and contact tracing, which I think a lot of people became familiar with during COVID, uses similar technology.
Also Google Maps and Uber, and finding direct routes to places, use similar technology, but with all different sources of data. So it is widely used, and these techniques have become pretty efficient. And all we’re doing is using that same technology but changing where we’re implementing it and the incentives that we’re putting behind it, to try to create something that hasn’t been done before. I think you’re muted.
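The graph-based recommendation Don gestures at here can be sketched in its most generic form: recommend the articles read by the users most similar to you, scoring similarity by overlap in reading history. This is the textbook collaborative-filtering technique with made-up data, not Aemula’s open-source algorithm:

```python
# Minimal collaborative filtering on an interaction graph:
# score candidate articles by the Jaccard similarity of the
# users who read them to the target user.
def jaccard(a, b):
    """Overlap of two sets relative to their union."""
    return len(a & b) / len(a | b) if a | b else 0.0

reads = {
    "you":  {"a1", "a2"},
    "near": {"a1", "a2", "a3"},  # overlaps heavily with "you"
    "far":  {"a9"},              # no overlap with "you"
}

def recommend(user, reads):
    scores = {}
    for other, articles in reads.items():
        if other == user:
            continue
        sim = jaccard(reads[user], articles)
        # Each unseen article inherits the similarity of its readers.
        for art in articles - reads[user]:
            scores[art] = scores.get(art, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("you", reads))  # ['a3', 'a9'] -- a3 ranks first
```

The article read by the similar user outranks the one from the dissimilar user, purely from interaction structure, with no knowledge of what any article says, which mirrors the neutrality point made earlier in the conversation.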
00:22:51 – Zach Elwood
Yeah, yeah, sorry. I’m muted because of these damn sirens. I’ll cut this out, obviously. Do you have any idea what direction we should go in next?
00:23:02 – Don Templeman
I liked those decentralized/centralized graph diagrams you had. I actually have a little whiteboard thing on my computer that had a similar image. So if you want to talk about that and how networks are formed, I can talk about information theory. I can share my screen again.
00:23:28 – Zach Elwood
Sure. Do you want to just keep talking about it, or do you want me to cue it up with a specific question? If you had an idea: what would I ask to cue that up, you think?
00:23:42 – Don Templeman
If you say like, oh, I was just pulling up some like images of network analysis or something, I can key into it from there.
00:23:50 – Zach Elwood
Actually, why don’t you just start talking, like starting a new topic? I think it’ll be a seamless edit if you’re just like, I want to show you these things on my computer. I think that’ll work.
00:24:07 – Don Templeman
Yeah.
00:24:08 – Zach Elwood
Yeah.
00:24:10 – Don Templeman
Okay. So I saw that you pulled up some images there of network analysis, and I think that segues nicely into some of the concepts that we’re working with, and some research that we’re doing at Aemula that we’re trying to publish, on how you can structure information networks and how that can actually make them more resistant to polarization and misinformation, to create higher-quality information environments. I actually have very similar graphs on a whiteboard on my computer.
00:24:41 – Zach Elwood
Oh yeah, let’s see that, yeah.
00:24:43 – Don Templeman
If I share my screen. So this is similar to what you just pulled up, on different ways that you can structure information networks. And the current way that most social media platforms work, and actually how we naturally coordinate as people in societies, is what’s called preferential attachment. So there are people that have significant influence, that a lot of people follow, and a lot of clusters form, where different people who follow one of these influencers or power users may not necessarily communicate with the followers of another influencer. And through this preferential attachment, it actually creates the most complex information network possible. So while it is natural and easy to form, it’s actually one of the worst ways that you can form an information network if you’re trying to promote high-quality content and everyone having access to as much information as possible. So if you were to think about it, it could be like traditional publications, where you subscribe to a specific newspaper, and this newspaper has some subscribers, and this newspaper has some subscribers. Or it can be like on Substack, where a large writer comes on board and they have their own subscribers that follow them, but it’s all relatively disjointed. And the reason that it is so complex is that there are multiple centers of influence that are all able to influence their own followers, but there’s not communication across those followers. So there are two different ways that you can try to structure an information network that are more stable. And one of them is fully centralized. So this is like how news was shared early on in the development of newspapers, but also, kind of more prevalently, as propaganda, where there is one information center and it distributes information to all of the users, or all the people in a community.
So while this is very simplistic, it is stable: there is one consensus truth that everyone is agreeing on, and everyone’s working from the same information.
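The preferential attachment structure Don contrasts with the other two can be made concrete with a toy simulation. This is an editorial sketch, not anything from Aemula: it grows a network where each newcomer links to an existing node with probability proportional to that node’s current degree, and a few hub “influencers” end up holding most of the connections while the typical node has only one.

```python
import random

def grow_preferential(n_nodes, seed=0):
    """Grow an undirected graph by preferential attachment:
    each new node links to one existing node chosen with
    probability proportional to that node's current degree."""
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}      # start from a single edge (0, 1)
    # endpoint list: each node appears once per unit of degree,
    # so uniform sampling from it is degree-proportional sampling
    endpoints = [0, 1]
    for new in range(2, n_nodes):
        target = rng.choice(endpoints)
        degree[new] = 1
        degree[target] = degree[target] + 1
        endpoints.extend([new, target])
    return degree

deg = grow_preferential(2000)
degrees = sorted(deg.values())
# a handful of hub accounts hold a large share of all connections,
# while the median node has a single link
print("max degree:", degrees[-1], "median degree:", degrees[len(degrees) // 2])
```

Running it shows the skew Don describes: the network is easy to grow, but influence concentrates in a few nodes almost immediately.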
00:26:50 – Zach Elwood
Like having just a few broadcast networks up until the 1980s or so, this monolithic… Distribution, yeah.
00:27:05 – Don Templeman
Yeah. And that’s why back then there was high trust in the news. Everyone felt like they could operate and communicate with people of different ideologies because you were all working off of the same information source.
00:27:18 – Zach Elwood
Mostly. Much more than we are now. Yeah.
00:27:20 – Don Templeman
Much more than we are now. But obviously there are problems with centralization: there’s really only one point of view, and there’s a lot of control over that point of view. So there’s a lot of incentive to skew it, to use it for securing power for whoever is sharing that information. So this doesn’t necessarily result in the highest quality content, or in people having access to the widest amount of information, but it is much more stable for people being able to agree with each other. The other option is fully decentralized, where everyone can communicate with everyone individually. And this is what has only recently become possible to do at scale. With news, you really need to rely on subscriptions so there is stable revenue, so you have the ability to do longer-form investigative reporting; high-quality reporting is an expensive process. But prior to 2024 you couldn’t really do that in a decentralized way, just due to technical limitations, because as a subscriber you need to pay just one subscription and then be in the network, with everyone operating seamlessly together. What we saw with subscription models in the past was closer to this preferential attachment: you’re subscribing to one publication or one writer, and that’s really where you get your information from. When you decentralize the network, information flows more freely, and there are different paths for information to take across the network. It’s more stable and resilient to people trying to inject influence into the network, whereas with preferential attachment there are really only a few power players controlling the narrative.
That can’t really happen in a decentralized information environment, which is why we’re building Aemula with that type of technology: we want to create as open an information environment as possible for people to communicate freely. And it creates the shortest path of information directly from the source to the reader, so there’s as little outside influence as possible over the information you’re seeing. And you can individually trust that the person is a credible source, which we can get into with reputations and how you determine credibility. But that’s really the core of what made this technology possible now: we have the ability to work in a trustless system where everyone can still trust that the quality is there. You don’t have to rely on trusted intermediaries like the publications of the past, so we can avoid this preferential attachment problem.
00:30:04 – Zach Elwood
And am I understanding correctly when you say it’s something that is only recently possible, that’s because of the blockchain technology and the ability to do these smart contracts where you set something up to operate and it enforces those rules? Am I understanding that correctly?
00:30:23 – Don Templeman
Exactly. So in the past, decentralized information networks were more akin to villages, where everyone in your immediate community is able to communicate with each other freely. As societies grew to larger scales, you really had to figure out a way to communicate across long distances, or with people you’re not personally acquainted with, because the process of news is inherently hearing information from a stranger: something you didn’t directly experience, you’re hearing from someone who did. And once you grow beyond 100 to 200 people in your immediate community, you’re really having to figure out, how can I trust that this information is accurate and true? How do I know if I want to incorporate it into my worldview? Which is why we moved into preferential attachment, where publications say: we are trusted intermediaries, we have a track record of reporting quality journalism, you can subscribe to us and trust that even though it’s coming from strangers you don’t know, we’re vetting it and making sure it’s all credible information. Since the 1970s, those institutions have started to lose trust, and there are a lot of reasons that go into why. But if we want to try to rebuild that trust, we really need to go back to that decentralized architecture where everyone’s communicating freely. If you want to do that at societal scale, it comes down to the problem of: how can I trust a stranger? How can I trust that the information they’re sharing is credible? On social media, we’ve had the ability to communicate in about as decentralized a way as possible, by being able to communicate with anyone online. We just didn’t know if they’re a bot account or from some foreign actor. These are all things that have happened and influenced our news cycles in the past.
And that’s one of the core issues with misinformation and finding news on social media: there’s all of that inherent mistrust, where you can’t really know who someone is or whether what they’re sharing is true. Which is why social networks also fall into preferential attachment. When you first join a platform, it gives you some accounts you should follow, and you follow them, and those become the centers of influence. Those are the people you’re largely filtering a lot of your content through, because you figure: I trust them, they’re a strongly followed account, a lot of people agree with what they’re saying, so I’ll use that as my trusted source of information. But if we want people to be able to operate at scale in a decentralized manner, they really need to be able to trust, individually, that this person has a reputation and credibility. And now that we have blockchain technology, we can tie reputations to individual people. We can do proof of personhood on chain. I can say: I’m verified, I’m not a bot, I’m not a foreign actor, I’m a real person, and here’s my credibility and track record. We can store all of that immutably on chain. So it’s given us all of these tools to allow trustless systems where I don’t necessarily need to know you, I don’t need to know who follows you, but I know you have a reputation on the platform, so I know I can trust that what you’re saying is credible.
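The immutable track record Don mentions can be illustrated with a minimal hash chain. This is an editorial sketch with hypothetical field names, not Aemula’s actual protocol: each reputation record commits to the hash of the previous record, so silently editing any past entry breaks verification of everything after it.

```python
import hashlib
import json

def append_record(chain, record):
    """Append a reputation record to a tamper-evident hash chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev_hash, "record": record}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash; any edited or reordered record fails."""
    prev = "0" * 64
    for entry in chain:
        body = {"prev": entry["prev"], "record": entry["record"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
append_record(chain, {"writer": "alice", "event": "article_supported"})
append_record(chain, {"writer": "alice", "event": "peer_review_passed"})
print(verify(chain))  # True
chain[0]["record"]["event"] = "edited_after_the_fact"
print(verify(chain))  # False: tampering with history is detectable
```

A real on-chain system adds consensus and signatures on top, but the core trust property, that a stranger’s track record can’t be quietly rewritten, comes from exactly this chaining.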
00:33:38 – Zach Elwood
Yeah, maybe we can talk a little more about judging reliability in a bit, but when you were showing the middle diagram about how people affect other people, it made me think of one of my favorite talks for the podcast, with Michael Macy, a researcher who’s done some really good work on polarization-related topics. One of his studies was about what he calls opinion cascades: how major opinion makers shift the perspectives of other people. Getting back to the myth of the left-right divide, where we put all these different stances on issues onto this spectrum of left and right, leading to this illusory clustering of labeled stances, his opinion cascades research showed how influential opinion makers, like, say, Trump, how Trump would react to a new issue, like COVID for example, would greatly influence our resulting polarization. We tend to confabulate reasons after the fact for why this stance on an issue is related to left or to right, liberal or conservative. But so much of it is actually due to the chance outcomes of which way an influential person on one side or the other is going to go on a new issue, and then the opinion cascades follow after that. And there’s a lot of chaos in the system too. But getting to the idea that what you’re trying to do is basically combat the emotion and the team-based reasoning that results from the usual, instinctual ways we interact with other people, and the ways our emotions and team-based affiliations guide our judgments.
And basically you’re trying to create a system that pushes against that, to make more reasonable, less biased, less team-based, less emotional outcomes, I think. Yeah.
00:35:59 – Don Templeman
Yeah, exactly. And opinion cascades is a good way to put it; there’s a lot of interesting research around it. When you have that preferential attachment, it only takes a few steps and a few people to strongly influence large groups. The way people describe systems structured with preferential attachment is that they’re in a state of criticality, where the perspectives of a large portion of the network can change very quickly. One of the interesting examples is the six degrees of separation idea, if you’ve heard it, where you’re only six connections separated from anyone else on earth. And that’s because there are these strong centers of influence, people who know a lot of other people and have a lot of influence over them. And research has shown that that is how we have structured our society: we are only a few steps away from large portions of the population. In an information environment, that makes it very difficult to find stability, because one or a few people’s opinion shifts can start to influence large portions of the network. More resilient structures are decentralized: you’re communicating more closely and more frequently with the people around you, but everyone is able to freely shift their own opinions, and individual opinions carry more weight in the emergent traits of the entire system. Because if you look at preferential attachment, a lot of the collective ideology of the network is driven by the opinions of those few small centers of influence.
If you look at a centralized network, the perspectives of the entire network align with whatever that centralized news source says. But with a decentralized system, the overall perspective of the network more accurately reflects everyone’s individual beliefs, because it’s an average consensus of how everyone is interacting, perceiving each other, understanding the world around them, and coming up with their own views independently, rather than being swept along by these large opinion cascades.
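The contrast Don draws, criticality under preferential attachment versus local resilience in a decentralized structure, can be sketched with a toy cascade model. This is an editorial illustration with arbitrary parameters, not Aemula’s model: in a follower tree grown by preferential attachment, flipping the most-followed account flips its whole subtree, while in a ring lattice where a node only flips when most of its neighbors have, a single flip stays local.

```python
import random

def tree_cascade(parent, seed):
    """Follower tree: each node adopts the opinion of the one account
    it follows. Return how many nodes a single flip at `seed` reaches."""
    children = {}
    for n, p in parent.items():
        if p is not None:
            children.setdefault(p, []).append(n)
    reached, stack = set(), [seed]
    while stack:
        n = stack.pop()
        if n not in reached:
            reached.add(n)
            stack.extend(children.get(n, []))
    return len(reached)

def lattice_cascade(n, k, seed, threshold=0.5):
    """Ring lattice: each node listens to its 2k nearest neighbours and
    flips only if more than `threshold` of them have already flipped."""
    flipped = {seed}
    nbrs = {i: [(i + d) % n for d in range(-k, k + 1) if d != 0]
            for i in range(n)}
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i not in flipped:
                frac = sum(j in flipped for j in nbrs[i]) / (2 * k)
                if frac > threshold:
                    flipped.add(i)
                    changed = True
    return len(flipped)

N = 1000
rng = random.Random(2)
parent, pool = {0: None}, [0]
for i in range(1, N):
    parent[i] = rng.choice(pool)        # preferential attachment
    pool.extend([i, parent[i]])
hub = max(set(parent.values()) - {None},
          key=lambda p: list(parent.values()).count(p))
print("hub flip reaches:", tree_cascade(parent, hub))     # a large share
print("lattice flip reaches:", lattice_cascade(N, 4, 0))  # stays local
```

The thresholds are stand-ins, but the qualitative gap is the point: one opinion shift at a hub cascades widely, while the same shift in the lattice dies out immediately.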
00:38:18 – Zach Elwood
Do you want to talk about the AI aspect of this work? You and I have talked a little bit about how this plays into LLM AI tools using content. Would you rather talk about that, or about how to judge reliability and accuracy using this kind of model? Which direction would you rather go? We can do both.
00:38:50 – Don Templeman
I’m happy to talk about both of them. Yeah, we can talk about the AI a little bit, because that might be interesting to people. So with AI: if we’re focused on trying to create resilient information networks and better ways for people to discover news, a big new player in that space is AI and LLMs, specifically people discovering information through chatting with LLMs. That’s a growing portion of how people discover new information. We’ve seen Google search usage go down while more people use OpenAI or Perplexity or Claude or any of these tools to chat and discover information. So in thinking about how we play into that ecosystem: a lot of news sources are trying to leverage AI for finding and discovering information, but we want to take a different point of view, because we believe you still need to rely on real people doing the hard work of reporting and discovering new information to make sure LLMs actually have accurate, up-to-date information. Since the training cutoff for a lot of LLMs, if they want to offer an opinion or provide information on real-time, relevant events, they need to go out to some third-party source to pull in the information, cite it, and use it in their response to the user. If more people are discovering information that way, we want to make that process as robust as possible. But currently, when an LLM is asked a question that requires going out to a third-party source, 40% of the time it cites Reddit, 20% of the time it cites Wikipedia, and the rest of the time it’s citing whatever it’s able to find online, because these systems really need large data sets, and the only places those exist are really Reddit and Wikipedia.
And while Wikipedia as a source becomes more and more credible over time, for real-time news on current breaking events, what LLM companies have found is that they really need to rely on professional newsrooms. So we’ve seen this trend: just under $3 billion of spend has been committed by LLM and AI research labs to license content produced out of professional newsrooms. Those are deals between, like, OpenAI and the New York Times or the Wall Street Journal, Amazon and the New York Times. There are all of these massive deals where they’re trying to get access to this high-quality information, because that’s really the differentiator between these models and how people choose to use them: which one can give me the best information. The problem is that if you’re relying on traditional publications as your source of news, you’re still falling into all of the traps we’re trying to solve, polarization in media, distrust in media, all of the ways how those companies are structured results in audience capture and in them including their own biases.
00:41:56 – Zach Elwood
Bubbles of thinking, biases, yeah.
00:41:59 – Don Templeman
Exactly. So what we’re able to do is take all of the benefits we’re creating for our information environment and make them accessible to AIs when they need to reference some real-time event that’s currently being reported on. And the reason we’re able to do it so easily is, one, since we are decentralized, an LLM company can come in and make a licensing agreement at the protocol level. They don’t have to go out and find all of the independent individual writers and make individual agreements with each of them; they can just say, we want access to all of Aemula’s content. Then each individual writer on our platform can decide whether they want to license their content or not. So everyone still independently owns all of their work, and we have a record of everyone’s ownership. If an LLM wants to cite something that one of the writers on the platform has published, that writer can decide whether to license it, and that writer gets paid when their information is accessed. Because a lot of the time, currently, when writers publish independently, whether on their own site or on Substack, their work can still get parsed by an LLM and cited and used in its responses, but the writer never sees any value from the use of their work in an LLM response. We want to make sure everyone is always paid for the work they produce. Through our protocol, if you elect not to license your content, we can protect it so it’s actually not discoverable by LLMs and can’t accidentally be licensed. But if you do want to license it, you get paid every time an AI actually accesses it. So it’s a lot more robust for the independent writers, and a lot more efficient for the LLM providers, because they don’t have to make all these bespoke deals across newsrooms. It’s just one ecosystem they can plug into, with all of our information in a standardized format.
It’s easy to parse, and it’s actually stored and structured the way LLMs think. Think back to the explore map we showed of all the articles in 3D space. Without getting into too much detail: when you prompt an LLM and it’s generating a response, it relates words and figures out what to respond with based on how closely words relate in a high-dimensional space, called vector space. That’s a whole separate topic, but you can abstract it away and have a conceptual vector space where an LLM can come in and say, I want to answer something on this topic. It can find that topic within our information map and determine what the best article there is. It can plug into our credibility ratings, rank what it wants to respond with, license that content directly from the writer, and use it and cite it in its answer to the end user. So it makes the whole LLM information-discovery process significantly more robust for end users who discover information through chatting with LLMs. And it’s a lot better for the writers, because they actually get paid when their content’s used.
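The retrieval step Don describes can be sketched in a few lines. This is an editorial illustration only: the article names, credibility scores, and weighting are hypothetical, and the 3-component vectors stand in for the high-dimensional embeddings a real system would use. The idea is that an LLM-facing lookup blends topical closeness in vector space with a credibility rating.

```python
import math

def cosine(u, v):
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# hypothetical article embeddings (tiny vectors for illustration)
articles = {
    "housing-policy-explainer": [0.9, 0.1, 0.2],
    "zoning-reform-report":     [0.8, 0.2, 0.3],
    "baseball-season-recap":    [0.1, 0.9, 0.1],
}
# hypothetical credibility ratings from the network analysis
credibility = {
    "housing-policy-explainer": 0.7,
    "zoning-reform-report":     0.9,
    "baseball-season-recap":    0.8,
}

def best_article(query_vec, alpha=0.5):
    """Rank articles by a blend of topical similarity and credibility."""
    return max(articles, key=lambda a:
               alpha * cosine(query_vec, articles[a])
               + (1 - alpha) * credibility[a])

# a housing-related query lands on the on-topic, higher-credibility piece
print(best_article([0.85, 0.15, 0.25]))  # zoning-reform-report
```

The blend weight `alpha` is an assumption here; the point is only that once articles live in a conceptual vector space with credibility scores attached, topic lookup and source ranking become one cheap computation.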
00:45:08 – Zach Elwood
Yeah, I was thinking, getting back to the idea of how you can create algorithmic approaches to judging the accuracy and reliability of news sources: it seems like there’s a lot of value in an approach that uses the social network analysis to say, oh, this person is creating content that appeals to a lot of different clusters of thought, that people across many different clusters appreciate. And it seems like, A, that’s valuable for people in general, but it also seems especially valuable for these AI and LLM agents that are trying to find non-controversial, agreed-upon information. It’s a way to theoretically do that without humans doing fact-checking, which leads to various biases too. I mean, it’s still going to be hard no matter what, but it seems like using this kind of algorithmic, objective approach leads to some really good outcomes, like: these statements and these works appeal to a broad range of people.
00:46:24 – Don Templeman
Yeah. And when you’re dealing with AI and LLMs, the scale of the data really matters; you need a lot of information for them to come up with good answers. And when you’re dealing with that type of scale, you have to rely on algorithms at some point. You can’t have some massive army of fact-checkers going through and trying to check the credibility of all the sources.
00:46:49 – Zach Elwood
Right. Yeah. It’s way too much time. Yeah.
00:46:52 – Don Templeman
So we need that scalable process for determining credibility. And we’re able to do that through that social network analysis, where we can say: if a post is getting a lot of diverse support from people with different ideologies, it is likely a high-quality source of information. On top of that, we have newsroom tools for writers, so they can go through a peer editorial process if they want more people to offer feedback on a piece they’re about to publish, and we can say that if they’ve gone through that process, it’s also likely higher quality. We can give them access to research and analysis tools, data sources, tip networks, credentialing, all of these tools, and as they implement them into their reporting, we can increase our quality metric for that work. And then we also have individual reputations for readers and writers. So if a writer is getting a lot of support from users with high reputations, then they likely have a high reputation too, and we can build that into the quality scores. Looking at everything holistically, you can start to come up with credibility rankings for not only authors but also individual articles, and use that to let LLM responses easily discover what is likely the highest-quality source for a specific topic. It can also let the LLM start to adjust its responses based on who’s asking the question. So if I have an Aemula account, it knows roughly where I fall relative to some of the sources it’s trying to find for me, and it can give me a source that is closer to my beliefs, one I’m more likely to agree with, rather than some source from an opposing point of view, where I’ll ask the question, immediately discredit the answer, say I disagree with this take, there’s bias in how the LLM was coded, there’s bias in the training data, and then either prompt my way into getting it to say what I want it to say, or go out and find a different source of information to support my point of view. Really, if we’re trying to optimize for providing the best answers to users, there isn’t one answer that is best for all users. You can start to gear it so it’s the best answer for that specific user, to help them better understand the concept they’re trying to understand.
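One way to sketch the “diverse support” signal Don describes is to measure how evenly an article’s supporters are spread across ideological clusters and blend that with supporter reputation. This is an editorial illustration: the entropy-based formula, the cluster labels, and the weights are assumptions, not Aemula’s actual scoring.

```python
import math
from collections import Counter

def diversity_score(supporter_clusters):
    """Normalized entropy of the clusters an article's supporters come
    from: 1.0 means support is spread evenly across clusters, 0.0 means
    it all comes from a single cluster."""
    counts = Counter(supporter_clusters)
    if len(counts) < 2:
        return 0.0
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(len(counts))

def credibility(supporter_clusters, supporter_reputations, w_div=0.5):
    """Blend cross-cluster diversity with mean supporter reputation."""
    mean_rep = sum(supporter_reputations) / len(supporter_reputations)
    return w_div * diversity_score(supporter_clusters) + (1 - w_div) * mean_rep

reps = [0.8, 0.7, 0.9, 0.8, 0.6, 0.7]
# same reputations, but broad cross-cluster support vs. one cluster only
broad = credibility(["a", "b", "c", "a", "b", "c"], reps)
narrow = credibility(["a", "a", "a", "a", "a", "a"], reps)
print(broad > narrow)  # True: cross-cluster support scores higher
```

The appeal of a signal like this is the one Zach raises: it rewards content that bridges clusters without requiring any human to label which cluster is “correct.”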
00:49:15 – Zach Elwood
Yeah, because at the end of the day, you have to worry about customer satisfaction, and so do the AI companies, no matter what our wishes and goals are about how these things should work. You have to give people what they want at the end of the day. Like, if somebody’s using Grok, and Grok’s telling them Elon Musk is a genius, all these recent things where it was praising Elon Musk with these weird responses: if you’re doing too many weird things that don’t appeal to your customer base, they won’t want to use your product. Right? So you do want to give them what they want, while also aiming for accuracy and responsible implementations and such. But yeah, you want to give people what they want. Yeah. Exactly.
00:50:03 – Don Templeman
And then that becomes a fragmented environment in and of itself, where if I like what Grok is telling me about Elon, I start using Grok more, and then Grok is my source of information. And that likely differs pretty drastically from someone who’s using Anthropic’s Claude or ChatGPT. So people start to operate out of different information environments. You would hope that, as they get access to wider and wider information sources, the LLMs converge on some general consensus where they all have similar answers, but there is currently a wide divergence in the types of answers they give. And if you only use the ones you like, it goes back into that same problem.
00:50:44 – Zach Elwood
Polarization cycle.
00:50:45 – Don Templeman
Yeah. Yeah. Everyone’s only going to work in the information environments that they want to engage with.
00:50:49 – Zach Elwood
And that’s kind of how the polarization works. It creates these two spheres: there are different schools, different kinds of companies, different circles, different churches. So yeah, you’re trying to break out of this entire paradigm and create an entirely new paradigm with new incentives, which is awesome. That’s why I’m so excited about your project. I just haven’t seen anyone else doing things that really try to break these fundamental paradigms the way you are, so I think that’s great. Anything else you want to talk about? I think we’ve covered a good amount. Do you want to throw in anything else interesting before we go?
00:51:36 – Don Templeman
No, I mean, I appreciate the support of our mission and what we’re trying to do. Obviously there’s a long way to go. It relies on scale, and we have a cold-start problem where we need content and readers, and really these things only start to work once you have large scale. So we do have a long way to go; it’s a challenge, but it works in concept. Kind of going back to giving people what they want: we don’t want to try to act against human nature. We want to make it as natural a process as possible. So that’s why we’re so focused on being fully open and fully transparent, with everyone operating independently, owning all of their own work, communicating independently. Those mechanisms only work if people are actively involved in them. That’s why we use human-readable algorithms for all of our algorithms, so people can actually go in, read them, understand what they’re doing, and start to have a say in the process, because it is all community governed. People can vote on how they want to see things change. Whereas some platforms like X have open sourced their algorithms, a large portion of it runs through AI, a black box that no one understands, and it takes real technical expertise to go in and understand how it’s operating. That’s not a community-governed process if no one is actually able to understand it. So that just goes to show we need people actively on the platform, participating in it, helping go through those iteration cycles to make everything better and actually align with our mission. But overall, I’m really excited to get people on the platform, start to hear their feedback, and start to see how we can improve. I hope a lot of these ideas resonate with people, and I’m obviously always willing to share more information and help answer questions on anything that may not have been clear.
00:53:39 – Zach Elwood
Yeah. If people are listening to this and they’re excited, or even just interested, how can they support you? What are the different ways, from a regular, non-influential person up to, say, somebody who wants to invest a bunch of money at that scale? Yeah.
00:53:58 – Don Templeman
So yeah, the easiest way is just going to aemula.com and creating an account. Like I said, it’s a pretty lightweight process. We’re not going to start emailing you a bunch of marketing materials; it’s just to prove that you own an email address, because you do own the account. The account is able to hold money for you, so you need some recovery mechanism, but we don’t actually have any ownership over it. So join the platform and start to mess around. You get a free trial, so it doesn’t cost anything; you don’t have to put your card in or anything. But if you do like it and you’re enjoying what you’re reading, start a subscription. Any subscription goes a long way at this point as we build up our subscription pool, so the economics are better and more attractive for the writers on the platform and we can actually reward them for all of the great work they’re doing. If you’re a writer, we need content, and we want to be able to support your work. You can publish directly on the Aemula app, or you can link your Substack if you’re already writing on Substack, and that’ll automatically cross-post anything you publish there. You retain full ownership of your work, and you earn from our paid subscribers at the end of every month. So there really is no downside, and you can stop at any time if you don’t want to do it. And then from there, just follow us, provide feedback, and start to interact and be active in our process of iterating and improving the platform. But really, just creating an account and going in and starting to play around with things is the best way to get involved.
00:55:32 – Zach Elwood
Awesome. Okay. This has been great, Don. I’ll let you go and thanks for joining me and best of luck on the project.
00:55:39 – Don Templeman
Yeah. Thank you so much.