Will AI Take Over the Government?
https://www.youtube.com/watch?v=7e_88JOrf3U
Transcript:
(00:00) The dominant narrative that everyone talks about treats AI as if it were a being. It's like, where is that being? It's a bunch of servers; it's a bunch of matrix multiplication. Why do you feel such a need to view AI as something that is going to be your leader, or something that is going to rule you, or something that is going to be separate from you and control you? Why do you not see it as a reflection of yourself? It's never just a mechanism for summing these disparate beliefs into a singular place where it can try to do some type of mediation
(00:44) between the differences. Start with this foundation: generative models are basically modeling collective belief, or whatever goes into the model. We can take a second step, where we add these citizen or source embeddings, which ensure that each individual voice only has a fixed-size box to speak through; they can speak through it as much as they want, but they're never going to have more space than anybody else, and that keeps a sort of evenly weighted representation of collective belief. And then we can introduce into
(01:20) that model belief over time: the expectations about what certain actions, taken at scale, will do to the world, and then what actually happens, and we can then upweight certain voices more than others. Hi, my name is John Ash. I am an AI engineer; I've worked primarily with natural language models for about a decade now. I used to work at a large fintech company where my job was mostly focused on bank transactions. There's a little string of text associated with each transaction that describes what the
(02:05) transaction was, but that string is pretty noisy and inconsistent, so my job was to make sense of the meaning of transactions: what that money was essentially used for, in the broader context of the account. I was recently on YouTube and saw a video from David Shapiro about AGI for governance, and I clicked on it right away, because AI in conjunction with human decision-making is something I've been interested in for a very long time, and so I thought I might see in that content at least some of the ideas
(02:56) I had come across in that time. Unfortunately, not once in this almost hour of talking did he make the connection that AI, generative AI specifically, is itself a democratic mechanism. He only ever framed it as some sort of leader, or integrated being, or representative, something meant to supplant the role of humans directly, and not as a higher-resolution form of voting, right? Which kind of astounded me. I thought that if you had this much experience talking about AI, exploring
(03:50) all of these new technologies, it would not be that hard for you to make the connection, or the analogy, that when you're training a generative model, you're capturing a distribution of collective belief. And I understand why most people do not make that connection: most of us have our exposure to what AI is through movies and stories, and because those aren't really based on any understanding of machine learning itself, it's usually anthropomorphized. We see signal in the noise; there's some sort of
(04:36) apophenia, and we see the pattern of ourselves in it, rather than seeing it as this representation of collective intelligence all summed together. But to me, when you look at any generative model, if you go back to something like a generative adversarial network that's just trying to model faces, that hasn't taken all of the art into it, it's never going to output anything that isn't a face. It is, by definition, like a pooling mechanism that tries to create a representation of the input, where whatever you output
(05:25) looks like whatever you input. And so to me that very clearly is a type of polling. It's a high-resolution form of sampling from a data set, where the output is sort of like a summary of the input. And I see this in the global narrative of what AI is, to the point that it confuses me that there has been no breakthrough on this narrative: AI doesn't need to be an integrated agent, it doesn't need to be a being; it can instead be a reflection of wider beliefs about complex issues.
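The pooling claim above can be made concrete with a deliberately minimal sketch: take the simplest possible "generative model," a weighted sampler over a corpus of statements, and note that its output distribution is exactly a poll of its input. The corpus strings here are invented for illustration and stand in for training data.

```python
import random
from collections import Counter

# Toy illustration (not a real generative model): treat a corpus of
# statements as "votes", and a sampler over them as the simplest possible
# generative model. Whatever it outputs is, by construction, a summary
# of what went in.
corpus = (["raise the bridge toll"] * 3
          + ["fund the library"] * 6
          + ["repave Main St"] * 1)

counts = Counter(corpus)
total = sum(counts.values())

def sample(rng: random.Random) -> str:
    """Sample a statement in proportion to how often it appears in the corpus."""
    return rng.choices(list(counts), weights=list(counts.values()))[0]

# The empirical distribution the sampler reproduces is exactly a poll result:
poll = {stmt: n / total for stmt, n in counts.items()}
print(poll)  # {'raise the bridge toll': 0.3, 'fund the library': 0.6, 'repave Main St': 0.1}
```

Scaling this up from a categorical sampler to a neural next-token model adds enormous nuance, but the output remains a sample from a distribution fit to the input.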
(06:21) Beliefs about what we would like to see happen in the future or in the present, what has happened in the past, relative to our past "voted" beliefs. And I say voted in quotes, because to me it's relatively clear that what AI enables is that you are no longer bound to the concept of voting. Even people like Ilya from OpenAI, one of the top-level people there: when he talks about governance, he has this idea that an AGI would be some sort of CEO and that humanity is the board members. The ideal world I would like to
(07:07) imagine is one where humanity are like the board members of a company, and the AGI is the CEO. The picture I would imagine is: you have some different entities, different countries or cities, and the people that live there vote for what the AGI that represents them should do, and an AI that represents them goes and does it. You could have multiple AIs; you would have an AI for a city, for a country, and they would be trying, in effect, to take the democratic process to the next level. He's still talking about votes, right? And
(07:40) I just don't understand why you would not think about this in these terms: everybody just writes what they believe. They write what they think should happen, what they think has happened, what they think is happening; you keep a record of that, and the model is continuously trying to model that collective belief. Okay, in the David Shapiro video there's a brief section, which maybe I'll play next, maybe not, that talks about the need to somehow collect the will of the people. The first thing that we need to do is
(08:25) use AI, whether it's a combination of existing machine learning and databases and stuff, combined with large language models, combined with deep multimodal models. We need to survey the will of the people. Whatever else is true, one of the chief purposes of government is to reflect the will of the people. Right now, the primary way we have to do that is through voting; that is the primary mechanism we have to express our will, where every so often we get to go pick a person who we feel represents our
(09:01) desires and beliefs, and then we go and hire them, via election, to represent our desires, our values, our beliefs in the process of government. But if you remove all humans, then it's not necessarily electing a representative; you have to express your will some other way. Now, that doesn't mean you get rid of voting; maybe you have more voting. But this goes back to Socrates' complaint. And I know Elon Musk is one of the people who said, oh, direct democracy makes sense; it absolutely doesn't. You do not want to
(09:33) live in a direct democracy, because then you have tyranny of the majority, and that can get real bad real fast. However, by combining existing online platforms, leveraging new technologies, using large language models and semantic clustering and all sorts of stuff, I suspect that we could easily build a platform, a democratic platform, that very, very comprehensively collects and understands the will of the people. And then what do you do with that information? We can figure that out later. But rather than just picking a
(10:09) person; because also, remember, go back to the slide that I did about fundamental human flaws: power-seeking, avarice, plain old ignorance. You get rid of those if you get rid of people. Now, you introduce new problems if you replace people with machines, but the point is that if we can directly express our will through some mechanism, maybe with a combination of blockchain voting and artificial intelligence and a few other things, then the machine apparatus of government will at least be aware of
(10:41) what we the people want. So that's step one: we need something that allows us to express our will and allows it to be recorded and accurately measured, so that the AI government can then use that. And even in that moment he does not make the connection that the training data itself is already doing exactly that. It's already collecting the will of the people, with all of its warts and all of its niceties, with all of the human shadow and all of the human light. And the problem that we run into with the basic model of Transformers, where we're
(11:19) just predicting the next token, is that there is just so much in there, and so much conflict in human belief, that it doesn't consistently behave the way we want it to. So we need a process of steering the outputs to be a little more focused and a little more weighted towards some representation that is deemed good, right? We need something like reinforcement learning from human feedback, which means training a reward model: you have people rank, or vote upon, a number of responses to a
(12:02) particular prompt and say which one is the best. Then you create this model, which can be rapidly trained against the original model, to update the parameters so as to produce outputs that consistently align with what the reward model scores as the right answer. And in that process there is, inherently, a weighting, an upweighting, of particular voices.
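The rank-and-vote step described here can be sketched as a tiny Bradley-Terry-style score table trained from pairwise preference votes. This is an illustrative stand-in, not how production reward models are implemented (those are neural networks over text); all names and numbers below are made up.

```python
import math

# Hypothetical sketch: a reward "model" as one score per candidate response,
# trained from pairwise preference votes (a Bradley-Terry / logistic update).
scores = {"resp_a": 0.0, "resp_b": 0.0, "resp_c": 0.0}

# Each vote says: the first response was preferred over the second.
votes = [("resp_a", "resp_b"), ("resp_a", "resp_c"),
         ("resp_b", "resp_c"), ("resp_a", "resp_b")]

lr = 0.5
for winner, loser in votes:
    # Predicted probability the winner beats the loser under current scores.
    p = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
    # Nudge both scores so the model better explains this preference vote.
    scores[winner] += lr * (1.0 - p)
    scores[loser] -= lr * (1.0 - p)

ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # resp_a collected the most preference "votes", so it ranks first
```

The upweighting the speaker mentions is visible here: whichever raters supply the preference pairs determine which responses the reward model learns to favor.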
(12:50) And so perhaps it is in that zone that other AI engineers are not making this connection, that what it is is a democratic mechanism, because they're stepping away from the idea of it being a perfect representation. But if we want a functional form of hyper-democracy, or a functional form of governance that involves AI, I don't think we want only a perfect representation; we can start with that, though. And in fact, maybe if you're watching this, you're wondering: well, how do you ensure the equivalence of votes if different people can just say more things and be overweighted in the model? Like, if it's predicting the next token, and if
(13:28) one particular voice is 50% of the tokens, the model is going to lean towards that voice more. I don't particularly think that's a hard problem to solve. There is something in language models called word embeddings, and each and every word is allocated the same amount of weights to model its meaning, right? No particular word is given more space to store its meaning. And on the same tack, if you were to have something like a citizen embedding, where each individual voice is tied to a particular identifier, then they could speak as much
(14:14) as they want into that space. It would be like everybody has a box to speak through: they could speak as much as they want, and get as high-resolution a copy of their beliefs, in as much nuance as possible; or, if they don't want to, they can say as little as they want, and the gaps will be filled in by the other voices who say very, very similar things. So you could essentially have a democratic mechanism that would allow people to speak their truth at any particular time, in any particular way.
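The fixed-size box idea maps directly onto how embedding tables work: every identifier gets the same number of parameters no matter how much it "speaks." A minimal sketch, with a hypothetical citizen-ID scheme and an arbitrary dimension:

```python
import random

EMBED_DIM = 8  # every voice gets exactly this many weights, no more

# Hypothetical citizen-embedding table: one fixed-size vector per identifier.
rng = random.Random(42)
citizen_embeddings: dict[str, list[float]] = {}

def get_embedding(citizen_id: str) -> list[float]:
    """Allocate (or fetch) the fixed-size box this voice speaks through."""
    if citizen_id not in citizen_embeddings:
        citizen_embeddings[citizen_id] = [rng.gauss(0.0, 0.01)
                                          for _ in range(EMBED_DIM)]
    return citizen_embeddings[citizen_id]

# A prolific voice and a quiet voice occupy exactly the same amount of space:
prolific = get_embedding("citizen_0001")  # imagine: wrote 10,000 statements
quiet = get_embedding("citizen_0002")     # imagine: wrote 2 statements
assert len(prolific) == len(quiet) == EMBED_DIM
```

Training would then update each vector from that citizen's statements, but the allocation itself stays evenly weighted, which is the point being made.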
(14:54) And the only thing the model is really doing is representing that belief. I think, and I've thought this for a very long time, that this is the incredibly obvious future of collective decision-making, but at every turn, none of you seem to get it. I'm a little confused about why you can't understand it; it's not that complex. But the dominant narrative that everyone talks about treats AI as if it were a being, as if it's this one thing. It's like, where is that being? It's a bunch of servers, it's a bunch
(15:46) of matrix multiplication, and it's not doing much more than trying to predict the next token. And when you're working with smaller models, and you're only inputting one particular type of data, it never really steps outside of those bounds. If it's a generative model trained on faces, like I said, it's only going to output faces. I think the problem is that the scale of human knowledge is just so large, and so difficult for us to fathom, that we have trouble realizing that, yeah, anything that you probe, that
(16:28) you see out of the model, is somewhat represented in the training set. Like, I heard Aza Raskin and Tristan Harris on Joe Rogan's show giving an example of it doing something it wasn't explicitly trained to do, and that was doing chemistry. I'm like, but people talk about chemistry online; that is in the training set, right? There is always something in the training set about, or related to, what is output. We have yet to see these models, for example, output brand-new languages that are completely consistent. I mean, sure,
(17:16) maybe you could give it a prompt that says, make a new language, but it would probably only be as complex as a simple cipher, where individual words map to other representations of those words. It wouldn't invent its own grammar; it wouldn't invent its own syntax. And I just want to ask all of you, if you get this, what you're thinking right now: why do you feel such a need to view AI as something that is going to be your leader, or something that is going to rule you, or something that is going to be separate from you and control you? Why
(18:04) do you not see it as a reflection of yourself? I know that there is no process that OpenAI or these other companies afford you to put your own words into that training set. They're limiting your capacity to create GPT models to something like RAG, retrieval-augmented generation, where some sort of vector-based document search introduces your meaning into the context window whenever it gives you a response. And not very many people are doing the sort of fine-tuned models that have a LoRA matrix, where your additional
(18:48) meaning is stored in a small subset of the parameters. Not a lot of people are doing that, but it exists, it works very well, and it can capture new meaning that is outside the current Overton window of belief. It's kind of hard, because the new meaning needs a place to plug in, right? It's like, if my knowledge or my belief set is this Lego piece, it can have a hard time fitting into that larger built-out representation unless I explicitly make that bridge.
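The LoRA technique referred to here stores new meaning as a low-rank update: the frozen weight matrix W is used as W + B @ A, where B and A together hold far fewer parameters than W. A rough numpy sketch, with made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 64   # hypothetical hidden size
r = 4    # LoRA rank: the "smaller subset of parameters"

W = rng.normal(size=(d, d))          # frozen pretrained weights
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection (starts at zero)

def adapted_forward(x: np.ndarray) -> np.ndarray:
    # Behaves exactly like the base model until B is trained away from zero.
    return (W + B @ A) @ x

x = rng.normal(size=d)
# Before any fine-tuning, the low-rank delta is zero, so output matches the base model:
assert np.allclose(adapted_forward(x), W @ x)

# The adapter stores new meaning in far fewer parameters than full fine-tuning:
print(d * d, "full params vs", 2 * d * r, "LoRA params")  # 4096 vs 512
```

Only A and B would be trained on the new voice's text, which is why the adapter can "plug in" new meaning without disturbing the larger representation.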
(19:34) And that's one of the reasons I'm very clear that it doesn't model anything outside of its representation: just look at how much I have to repeat myself, and draw the lines very clearly to my representation or understanding of the world, to get it to do things or talk about the ideas that I work with. Many people who watch my channel know that I'm a proponent of a concept called Cognicism, which is basically this: using AI to model collective belief. And there's sort of an attention market within it, whereby the model is learning over time
(20:19) which voices to attend to more in certain contexts, such that when it outputs useful information to people, it's only citing, and only sourcing from, the people who have a clear history of getting it right. You could think about this with a doctor, for example. A doctor is consistently making predictions about their patients: they're making diagnoses and basically saying, if you take this medication, if you do this treatment, you will get better. Well, I don't want to listen to a doctor who consistently gets
(20:57) it wrong, right? I want there to be some sort of distributed weighting, where the doctor who is always getting it right is sampled more. And in governance, I don't want people who write legislation that doesn't lead to the intended outcome to be uplifted again; I don't think they should write legislation if they continue to write legislation that doesn't achieve what it sets out to achieve. Instead, you want to say: okay, this person made this set of guidelines and they worked; they seem to get something about the world
(21:37) and how it functions, and so we should upweight that person. And that's sort of going beyond the notion of a democratic mechanism, because a democratic mechanism is supposed to be evenly weighted: everybody is supposed to have exactly one vote. A market dynamic, by contrast, functions such that you can earn more voting power by earning more money or capital; when you have more access to resources, you have a greater impact on the market and how it functions.
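That track-record weighting can be sketched with a multiplicative-weights-style update: every voice starts equal, and wrong predictions discount a voice's future influence. This is a generic illustration of the idea, not the actual mechanism the speaker builds; the doctors, outcomes, and discount factor are invented.

```python
# Sketch of track-record weighting via multiplicative weights:
# each voice starts with equal weight; wrong predictions are discounted.
weights = {"doctor_a": 1.0, "doctor_b": 1.0, "doctor_c": 1.0}

# (voice, predicted_outcome, actual_outcome) for past predictions; data is made up.
history = [
    ("doctor_a", "recovers", "recovers"),
    ("doctor_b", "recovers", "worsens"),
    ("doctor_c", "recovers", "recovers"),
    ("doctor_b", "worsens", "recovers"),
    ("doctor_c", "worsens", "worsens"),
]

DISCOUNT = 0.5  # penalty factor for a wrong prediction
for voice, predicted, actual in history:
    if predicted != actual:
        weights[voice] *= DISCOUNT

# Normalize so the weights act as a distribution over whom to sample/attend to.
total = sum(weights.values())
attention = {v: w / total for v, w in weights.items()}
print(attention)  # doctor_b, wrong twice, ends up attended to least
```

This is exactly the departure from one-person-one-vote the passage describes: influence is earned from predictive accuracy rather than from capital.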
(22:13) So what I'm saying is that we can start with this foundation: all right, generative models are basically modeling collective belief, or whatever goes into the model. We can take a second step, where we add the citizen or source embeddings, which ensure that each individual voice only has a fixed-size box to speak through; they can speak through it as much as they want, but they're never going to have more space than anybody else, and that keeps a sort of evenly weighted representation of collective belief.
(22:48) And then we can introduce into that model belief over time: the expectations about what certain actions, taken at scale, will do to the world, and then what actually happens, and we can then upweight certain voices more than others. All of those parts together form what I call Cognicism, and the model that does this is called an Iris. We talk about taking those learned representations, and those staked beliefs that are guiding the collective decision-making process, and writing them to a blockchain. That has already
(23:37) had a large amount of work done, in a different context, to secure that which we don't want to see modified. And this is a similar sort of case: if what you said in the past affects your future influence on the decision-making process, obviously people would want to retroactively change what they had said, so you need some place where you can store it securely, and we call that the semantic ledger. But all of these terms are just me trying to give you boxes, or pointers, so that you can make sense
(24:15) of it, right? So that you can see the parts and see how they fit together. But I'm struggling, in this sense: there is a person out there making hours and hours and hours of AI content, and they talk about AI for governance for an entire hour, and not a single time do they make the connection that AI itself, generative AI specifically, is a democratic process. It's a democratic mechanism that only ever represents the things that go in. And I feel confident enough to say that,
(24:53) and I think you should also feel confident enough to say that, because if you simply take anything about physics out of the training data, or take out anything about a particular book, the model will never talk about it. It's never going to spontaneously recreate a book that hasn't yet been written, something that has not yet gone into that training set. It has to be in the training set for it to talk about it at all. And maybe this is going too far to the edge of what you want to hear about, but even things like FunSearch, which I
(25:31) think DeepMind released recently, which claimed that an AI model was making new discoveries: it still had a target by which it could evaluate success. It was a particular math problem set, I don't remember the name of it, but it was something where you could directly test success, and so it could rapidly iterate and check whether its ideas worked, right? At every single step along the way you get a clear feedback signal that says: oh, we either are succeeding or we are not
(26:14) succeeding. It's the same concept behind AlphaGo, which is a form of self-play, but in the game of Go; when you're doing self-play, the win state is very, very clear. Large language models don't have access to the real world directly; they only have access to what we say about the real world. The model is only ever modeling belief space itself; it's only ever modeling what we think about the world. And it's going to reinforce the peak of the Overton window; it's going to reinforce things that might actually be wrong, right? And it's
(26:58) going to push back against things on the edges, which might actually be people ahead of the curve, people who know a little better about the world, who understand the world in a better way. Because the models that we use to represent reality, and the models that we use to make predictions about reality, are constantly evolving themselves, and sometimes we have to let go of things that made pretty good predictions in favor of things that make much better predictions. Sometimes we have, you know, models of the solar system that make
(27:39) pretty good predictions about where the planets are going to be, but they require a lot of complexity and nested loops to make those predictions; and then you have a new model of the solar system, and that model makes much better predictions with a lot less math, and so we let go of those old beliefs. We need to have that mechanism in AI. And until we realize, and collectively embrace, the narrative that this is more of a democratic process, one that is being steered by corporations in a very non-democratic way, I think
(28:25) that we're going to keep facing a lot of existential risk, and we're going to expose ourselves to more and more apophenia, more and more seeing of patterns in the noise, to a very insane degree, because there are so many collected artifacts of our output online that it can very much model our behavior. It probably will not be an integrated being with an experience of reality, because it's not embedded in time, it's not embedded in the world; it's embedded in belief space, and the only thing that it's seeking to
(29:14) do is predict the next token, or create outputs that look like the inputs. And to me, tell me if I'm wrong, that is not a being; that is not an agent. That is just a process of polling, or tallying, where there is an ability to do math upon meaning. We had this insight with word embeddings: you can do a type of math on representations of meaning, where you have something like king minus man equals queen minus woman; there is some analogous representation in the vector space where you can perform that type of operation.
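The king/queen arithmetic can be reproduced with toy vectors. Real word embeddings are learned from data; here the two axes (roughly "royalty" and "gender") are hand-built purely to illustrate the operation:

```python
import numpy as np

# Toy word vectors along made-up axes (royalty, gender); real embeddings are
# learned from data, but the arithmetic works the same way.
vectors = {
    "king":   np.array([0.9,  0.8]),
    "queen":  np.array([0.9, -0.8]),
    "man":    np.array([0.1,  0.8]),
    "woman":  np.array([0.1, -0.8]),
    "person": np.array([0.1,  0.0]),
}

def closest(v: np.ndarray, exclude: set[str]) -> str:
    """Nearest word by cosine similarity, skipping the query words."""
    sims = {w: float(v @ u) / (np.linalg.norm(v) * np.linalg.norm(u))
            for w, u in vectors.items() if w not in exclude}
    return max(sims, key=sims.get)

# king - man + woman lands nearest to queen:
result = closest(vectors["king"] - vectors["man"] + vectors["woman"],
                 exclude={"king", "man", "woman"})
print(result)  # queen
```

With these toy axes the result vector equals "queen" exactly; with learned embeddings it is merely nearest, which is the usual demonstration.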
(30:07) Now we can do that with a lot more nuance, where each and every voice is afforded the capacity to speak their truth and to have it carry real resolution, real weight, that affects the collective decision-making process. But maybe the people in power just don't want you to have that. Maybe the people at OpenAI know this is possible and they just don't want to give you that power. I'm not sure. I'm not sure whether we're just trapped in these narratives that have been made by Hollywood, or whether there is a true
(30:51) understanding that this is a democratic mechanism, and no effort to push that narrative, because it would mean, essentially, that every output is a distributed sampling of the input. And if every output is a distributed sampling of the input, you're basically doing remixing, and so you're sort of laundering plagiarism, when it comes to art models and when it comes to language output; because every time you write something, every time you create something through the model, it's only ever because it's pulling from a real
(31:24) person, and if it's pulling from a real person, that means you're doing a modification of an original input. So they don't want there to be an understanding, behind the model, of the source that a belief came from. But I think this could basically revolutionize the way we make decisions together, and have those decisions reflect our needs better, so that if you are suffering in your local community, if you're suffering in your life in some way, you don't have to struggle to speak to your representative and expect them to never listen to you.
(32:06) Instead, all you have to do is write your beliefs, and because attention within the model is a function of just energy itself, it can and will attend to your words, as long as you're allowed to pass them through the model. So I hope this made sense. I hope this was illuminating to a number of you about how we could use generative models to do a higher-resolution form of democracy, and I hope some of you have answers to my questions about why the collective narrative refuses to amplify, or talk about, this frame, and
(32:58) only ever wants to talk about this in the context of: well, it's the CEO of the board, where humanity is the board members; it's the president, or the king, or the representative. It's never just a mechanism for summing these disparate beliefs into a singular place, where it can try to do some type of mediation between the differences. You know, it can embody and embrace the fact that people have different beliefs, and try to create and learn a representation over time that can draw pathways between these different sets of beliefs. If, you
(33:49) know, Democrats are this hill of belief, and Republicans are this hill of belief, how do you, over time, trace a pathway to help them connect and see each other, right? See each other as the real human beings that they are, with the real cares and concerns that they have, which are usually very, very similar; we just don't spend enough time interacting to see eye to eye. And maybe, if we just take a moment to reframe, we can reinvent the world for the better. Thank you, and thank you for listening.