8 Comments
Jun 13, 2023 · Liked by Brian Chau

"a combination of empirical data and a large amount of explicit human commentary"

The problem is that the larger part of human commentary is done by those Joel Spolsky calls 'smart people who write white papers.' Smart people who write lots of white papers are vastly different from the people Joel calls 'smart people who get things done.'

Author:

Yes. I'd love to see large databases of CEO emails opened up for LLM training.


Joel Spolsky would be a great podcast guest. Joel ran the development of Microsoft Excel in the early days. They actually wrote their own compiler too. Go read Joel's archives.

https://www.joelonsoftware.com/archives/

Jun 12, 2023 · edited Jun 12, 2023 · Liked by Brian Chau

On the basic issue of market coordination, i.e. central vs. decentralized planning: of course AI isn't going to undermine the idea that decentralization works better than centralization, because the knowledge required for society to coordinate is distributed, constantly being added to, and not all of it is public.

It's true that technocratic idealists keep hoping better computing power will enable central planning, but better computing power is also being used to aid all the participants in the markets. Even if somehow you did have a centralized super-AGI that could model the thinking of individual humans, presumably by that point those humans would have AGI to aid them.

In addition, all the interactions between those decentralized players add even more complexity and possibilities than a centralized entity can deal with. It can't predict all the things humans plus AGI can invent, or how their values and priorities may change.

No single person or small group knows as much as the cumulative knowledge of a large group of people. If AI and then AGI tools aid individuals, they will change how those individuals plan and make decisions. While centralized computing power like the supercomputers governments may create can be more powerful than what most corporations have (these days some corporations may outdo them), it is no match for the decentralized, combined computing power of humans aided by AI.

Jun 11, 2023 · edited Jun 11, 2023 · Liked by Brian Chau

re: " Through centuries of cultural evolution, social institutions developed which balance explicit statements and implicit, hidden sentiments. "

And these LLMs are trained on the writings that trace that evolution and embody those implicit, hidden sentiments to at least some degree. The issue to some extent is compression: whether averaging together all that information essentially lowered the resolution at which that implicit, hidden sentiment is embodied, so it's difficult to get at, or whether the models are large enough to have retained it and be able to access it.

Regardless: the fact that our world consists of many varied cultures should be a concern when the focus seems to be on creating a single one-size-fits-all AI rather than a plurality. There seem to be technocratic idealists who assume there can be global agreement on the values and culture they embody into these systems, or who perhaps naively assume that "democratic input" done their way can find the "right" culture, when current democratic systems leave Americans struggling to agree on what culture to teach in schools. They ignore issues like the tyranny of the majority or a "dictatorship of the most intolerant minority." Consider the global culture war over "should it portray images of Mohammed?", world politics and not just American, differing religious sects, etc.

EAs and others striving for rational worldviews likely fall into the camp of hoping for a fully rational, one-size-fits-all AI, but that is unlikely to be something the world market will accept. Many values aren't matters of logic; there are many cultural issues that are arbitrary.

The current pragmatic approach would seem to be a plurality of AIs with interfaces trained for particular subcultures (or a "chameleon" AI, as the page below puts it, that can take on different masks; a rough code sketch follows the link). That might be done either by finding ways to pull out subculture data and train an AI (or a facet of one) for particular subcultures, or through a distributed, crowdsourced "democracy" acting as guidance rather than control: the Hayekian cultural evolution process applied, in a sense, but faster, to myriad AIs exposed to particular subcultures, either to bring out what they already embody or to allow subcultures to nudge them to fit. Each individual is a subculture, but most people start with defaults from the cultures they were raised in, so it's a start. This page details this:

https://RainbowOfAI.com
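
To make that concrete, here is a minimal sketch of one cheap version of the "chameleon" approach: the same base model wearing different subculture masks supplied as system prompts. This is only an illustration under my own assumptions (the persona wording, function names, and the choice of the mid-2023 OpenAI chat API are mine, not anything taken from RainbowOfAI.com); a real subculture facet might instead be a fine-tune on data pulled from that subculture, as suggested above.

```python
# Assumes the OpenAI Python client as it existed in mid-2023 (ChatCompletion API)
# and an API key in the OPENAI_API_KEY environment variable.
import openai

# Illustrative placeholder personas; real masks would be built from subculture data.
SUBCULTURE_PERSONAS = {
    "secular-urban": "Answer as a writer steeped in secular, urban, progressive culture.",
    "rural-traditional": "Answer as a writer steeped in rural, traditional, religious culture.",
}

def chameleon_reply(question: str, subculture: str) -> str:
    """Ask the same base model the same question through a subculture-specific mask."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SUBCULTURE_PERSONAS[subculture]},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]

# The same question answered through two different cultural facets,
# rather than one averaged-out, one-size-fits-all set of values:
for mask in SUBCULTURE_PERSONAS:
    print(mask, "->", chameleon_reply("How should schools teach history?", mask))
```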

Jun 11, 2023 · edited Jun 11, 2023 · Liked by Brian Chau

re: "While AI can model scenarios and make predictions, it lacks the capacity to understand and navigate the cultural nuances and historical contexts that underpin human institutions."

Again: there is a difference between a Spock-type AI that engages in such modeling and the sort of AI being trained today. This AI is explicitly trained on lots of human communication from and about varied cultures. Its main utility at the moment is in aiding writing, including mimicking cultural products: writing new Seinfeld scenes updated for issues in 2023, or writing what George Orwell might have said about AI regulation if he were alive today.

The issue is partly finding ways to prompt the AIs to access that information. Unfortunately, the human training that teaches them to follow instructions sometimes embeds modern sentiments. One AI (a less advanced one off the LLaMA leaks), when asked to write something Thomas Jefferson would have written, came out as if he were a progressive, big-government, woke climate alarmist.

Recent work shows these LLMs are often better than Google Translate when translating between languages. If those training the systems don't make it difficult to access this cultural information, perhaps these LLMs can also help translate between the varied political subcultures, explaining to a progressive reporter tasked with writing for a mainstream news outlet in a neutral fashion how a conservative might react to their story and why (a rough sketch follows). The page https://FixJournalism.com goes into that possibility, citing J.S. Mill's point that people should be exposed to arguments from those who actually believe them: the LLMs embody the voices of believers of varied viewpoints, if those voices can be brought out rather than squashed.
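
As a rough sketch of what that "translation between subcultures" might look like in practice (the wording and function name are my own illustration, not code from FixJournalism.com), the tool could be little more than a prompt template wrapped around any chat-style model:

```python
def viewpoint_translation_prompt(draft: str, audience: str = "conservative") -> str:
    """Build a prompt asking a model to voice a sincere believer's reaction to a draft story."""
    return (
        "Here is a draft news story intended for a mainstream outlet:\n\n"
        f"{draft}\n\n"
        f"In the voice of a thoughtful {audience} reader who sincerely holds that viewpoint, "
        "explain how they would likely react to this story and why, so the reporter "
        "can present the issue in a neutral fashion."
    )

# The resulting string can be sent to any chat-style LLM, for example the
# chameleon_reply() helper sketched in the earlier comment:
# chameleon_reply(viewpoint_translation_prompt(draft_text), "rural-traditional")
```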

Jun 11, 2023 · edited Jun 11, 2023 · Liked by Brian Chau

It seems like many people are concerned about AGI due to the sudden appearance of emergent behavior in LLMs. Many of us would argue that's not a likely path to AGI, but they extrapolate from it. Humans are considered GI, general intelligences, despite the existence of cognitive biases and other reasoning flaws.

It seems worth noting that you appear to be talking about AGIs with better reasoning capabilities. That may or may not be the case in the first AGIs, even if the hope is there will be better ones in the future. The hope is that their lack of our flaws is part of what lets them achieve superhuman intelligence, though as with humans, some may achieve great things in one area while being flawed in others (as LLMs obviously are now).

So a distinction might be made between future AI more likely to reach a superhuman AGI able to embody a more fully reasoned worldview, versus AI that may suffer from our flaws precisely because it was trained on humanity's output.

re: "AI, on the other hand, is arguably less susceptible to these cognitive distortions. AI achieves this through its capacity to aggregate large amounts of data."

The knowledge these LLMs are trained on consists of human writings that contain flawed arguments embodying human cognitive biases. A thread on a new paper this week (which Robin Hanson retweeted) explored why LLMs seem to exhibit flaws in certain types of reasoning but succeed at others, and suggests:

https://threadreaderapp.com/thread/1663936258991853575.html

" We find that GPT3, ChatGPT, and GPT4 cannot fully solve compositional tasks even with in-context learning, fine-tuning, or using scratchpads. To understand when models succeed, and the nature of the failures, we represent a model’s reasoning through computation graphs.

We show that Transformers' successes are heavily linked to having seen significant portions of the required computation graph during training! "
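
To make the quoted point concrete: a compositional task is one whose answer is assembled from a graph of many smaller sub-results. Multi-digit multiplication is a standard example, and a few lines of Python (my own illustration, not code from the paper) show how quickly that graph grows:

```python
# Enumerate the single-digit partial products that long multiplication
# composes into the final answer; the count grows with operand length.
def long_multiplication_steps(a: int, b: int):
    steps = []
    for i, da in enumerate(reversed(str(a))):
        for j, db in enumerate(reversed(str(b))):
            steps.append(int(da) * int(db) * 10 ** (i + j))
    assert sum(steps) == a * b  # the sub-results really do compose to the answer
    return steps

print(len(long_multiplication_steps(12, 34)))        # 4 sub-steps
print(len(long_multiplication_steps(98765, 43210)))  # 25 sub-steps
```

A model that has memorized many of the small sub-graphs seen in training can look competent on 2x2-digit problems while still failing to compose the larger graphs it has never seen, which is the pattern the thread describes.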

It may be that new ways of training models can address the issue, or it may take a more complicated architecture and form of training (as Yann LeCun seems to think), or a symbolic/neural hybrid (which may be a shorter path, even if it's not the long-term approach).

So currently these LLMs aren't reasoning in usage, nor are they reasoning during the training phase. When fed all that data, they aren't reasoning about its quality and somehow choosing the best of it; they are instead embodying the aggregate, with all the cognitive flaws embedded in it. I guess the hope of many is that those flaws somehow average out and the logic embedded implicitly in most statements wins out. It seems unclear whether that will happen with the current approach of learning on lots of contradictory information, or whether new training methods or architectures will be needed.
