Sorry, I should have first posted that this is great and useful work; thanks for doing it. If there is a way to combat AGI fears, it helps. My immediate reaction, from decades of debating true believers on various topics (from nonsense like homeopathy to socialism), was just that even if this is fully logical and accurate, long arguments leave room for true believers to find some way to rationalize a loophole.
There are so many uncertainties in predicting the future that I suspect it won't be easy to get those driven by fear to think clearly enough to set it aside unless/until there are simple, short arguments. Of course, sometimes the long arguments are needed first, before people can figure out how to compress them, à la the famous saying "if I had more time, this letter would be shorter."
It's just that there aren't enough people concerned with combating the threat of regulations, and it's a shame if the AGI debate distracts attention from the difficult near-term job of finding a way to head those off.
I agree AGI isn't imminent. Yann LeCun, Meta's chief AI scientist, seems to suggest it's not worth his time debating the more extreme doomsayers since there are more realistic immediate concerns to focus on. To those concerned about regulation (unfortunately he is a pro-regulation type), it's an interesting theoretical debate that distracts from the near term: the FTC complaint calling for heavy AI regulation, OpenAI pushing for near-term regulations, etc.
Perhaps AGI fears heighten the concerns that underlie some of that, but it seems focusing on the near term may be more useful than getting distracted by AGI debates about something that is very unlikely to be imminent. Those who believe it isn't imminent should believe those debates can be postponed (or left to those who have no interest in fighting regulations), even if it's a shame some sharp folks waste time on AGI fears in the meantime.
While it's unfortunate that some sharp people in doomsayer mode regarding AGI get media coverage, that seems like a distraction from the near-term threat of regulation of non-AGI systems, and from other more timely AI issues like pluralism. OpenAI's request for help "democratically" deciding AI steerage is essentially pushing the idea of regulation via that process: voluntary at first, but then perhaps with a push for "democratic" governments to adopt it.
Then perhaps there will be a push to use those democratic processes designed to limit AI speech as an excuse to find a way to apply them to human speech (at least where there is no First Amendment, while they figure out ways around it here and hope that big tech falls in line to comply with this "democratic" process).
Or the OpenAI democracy push may indicate a possible implicit hope that they can head toward a monopoly that is run in a "good" fashion: push Microsoft to prevent add-ons that allow other AIs to be plugged into their office suite, search engine, etc., so that people use the "good" AI. Or at least require all AIs they allow (and that Google allows in its office suite and its AIs) to comply with this "democratic" process.
I suspect many were taken by surprise by the level of emergent behavior in LLMs. All projections of the future are problematic, à la Alan Kay's "The best way to predict the future is to invent it." Those concerned with AGI will merely fall back on the non-falsifiable prior that someone may invent something leading to equally unexpected advances. Unfortunately, prior trends aren't guarantees of the future, which leaves true believers loopholes that it's unclear can easily be squashed by long arguments with lots of data about trends that may change.
There are other near-term issues. The article at https://FixJournalism.com, on fixing mainstream journalism by using AI to nudge mainstream news toward neutrality to help it regain trust, mentions the issue of AIs that summarize the news, which undermines news outlets' advertising revenue. That article was drawing attention to how AI can help the news, but AI can also harm it in ways that may lead outlets to call for regulation. Either all news goes behind paywalls, or there isn't any revenue to generate the "free news" that AI summarizes; of course some US news can still come from free sources like NPR or even foreign outlets like the BBC. The news media has already been pushing for regulations to hand it a piece of Google & Facebook et al.'s revenue, and for bailouts and government funding.
The real concern is human alignment in the near term, in terms of the public mindset regarding regulations. The site pointing out the analogies between regulating speech and regulating AI is an attempt to get those in the public who still value free speech to consider protecting AI speech like human speech: https://PreventBigBrother.com
Perhaps that isn't the approach that will work, but there need to be varied attempts to figure out what might get traction with the general public or politicians.
I agree that this article does nothing to assuage my concerns about existing ML tools being used for harm by humans, particularly by states. As you suggest, there isn't really a technical solution to this problem.
Still, I'm actually optimistic on the impact that AI will have on government. I think that by greatly reducing the marginal cost of writing and style-matching (and broadly, "persuasion-based skills"), it will increase the relative value of reality-based skills (creating policy which actually works). I'll likely write a much longer version of this in a future article.
Thanks for the reply. As I noted in a follow-up post, I should have started by thanking you for the great work you did. I just worry that AGI debates will distract everyone from the near-term harms. The most immediate would be states harming the progress of AI (rather than free states using AI to harm people) when AI can have great benefits.
China may be imminently using AI to censor, but that's not yet happening in the US or other mostly free countries; of course that's the next stage, even without regulation. Again, it seems like the OpenAI call for grants to steer AI using a democratic process may lead to calls to use that same process to steer human speech, either adopted voluntarily by big tech or via regulatory pressure (even if not in the US, say the EU does it and big tech then gives in worldwide).
Perhaps OpenAI's call for grants may lead to some way to inject a notion of pluralism (you choose which AI you prefer based on its particular form of democratic control, or lack thereof, rather than a one-size-fits-all universal "democracy"), so there is still hope.
That article on fixing the news media mentions in passing the prospect of journalists using AI to comb through government documents to find problems more efficiently, which could have a positive impact on government.
Re: "creating policy which actually works": that depends on how you define "works" in a society with varying values, which is one reason libertarians try to privatize concerns. However, the hope is that it's easier to show that government policies don't work, or that private policies work better. Putting the burden of proof on government to demonstrate that a policy works (or is likely to) would likely reduce state power.
It may be that there can be a push to reform government in general. The site that referenced pluralism, TheBigIllusion.com, is like your article here: a long diatribe with logic and evidence behind it that it's unclear how many will read, since it requires questioning some assumptions. In that case it demonstrates some fundamental flaws in our current electoral process that lead indirectly to some of the political dysfunction we see spilling over into society in general, which is likely to get worse unless it's addressed. Addressing it may open the prospect of other reforms improving government processes: or things may get worse.
Excellent stuff here, Brian. My take on the hard push for regulation we're hearing from the mega-caps is that it's an easy way for them to maintain a lead in the space. It already costs millions to train a large model; layer in government regulations and it's a surefire way for the incumbents to maintain dominance.
One point that's not quite clear from the article: do you think doomers are wrong even conditional on AGI being developed soon? Or do you think we have reasons to be optimistic even if timelines turn out to be short? (Which I understand you believe to be improbable, but presumably not completely impossible.)
Great article, and very comprehensible even for someone without a deep technical background in the ML field. Looking forward to the future installments.
Sorry, but you talk endlessly about hardware from a not very ML-educated perspective, and where are any of your actual arguments? What can you even argue for? 1. That Moore's Law will break (hahaha, good luck with that). 2. An LLM plateau stage (yeah, maybe, but 2045? Come on, that's ridiculous). 3. Nothing. To the contrary, the open-source revolution we are seeing with LLaMA now has shown that we are making huge leaps forward in reducing parameter size and hence required compute power, i.e. that exponential growth might be an *underestimation* if you combine Moore's Law with global collective innovation. Also, more compute power always directly translates to better model performance. The only thing that is really holding AGI back is lack of compute, not invention. This is very, very clear. Takeaway: AGI might happen next year, if some rich whiz kid rigs up something huge in his mom's basement. This is where we are now, as ridiculous as that sounds.
Every single one of these fallacies is directly addressed in the article.
Those are not fallacies. What you argue simply does not change the basic facts, not even remotely. Also, please read this intentionally leaked Google document: https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
Not sure you addressed the "innovation leading to dramatically reduced parameter size" argument.
However, OP is a little over-optimistic re: Moore's Law, which is pretty much already "broken" and only being kept alive thanks to the moving of goalposts.
For graphics cards, processing power has always doubled almost every year, instead of every two years. So the exact opposite is true, which makes the "argument" even more ridiculous.
GPUs are still bound by Moore's Law, which Huang himself has stated is "dead" and which regardless has been slowing for years. There are gains to be had in parallelization and specialization, but there's a bound on these as well, and much of that gain has already been delivered.
I think you are missing the point here: GPUs gain processing power at almost twice the rate Moore's Law predicts. So gains in compute power have not slowed; they have further accelerated. Add all the specialization mentioned in the article to that, and you get even further acceleration beyond Moore's Law. Now you have people able to train davinci-003-like LLMs on their home RTX 4090 for $300, something that cost tens of millions of dollars in investment not even 3 years back. The whole article is totally backwards and pointless if it was meant to argue against the basic reality of the situation: that technology grows at exponential speed.
You can listen to Jensen whining about losing the competitive edge against Intel all you want. The facts are that Moore's Law has not only delivered, but delivers almost twice over in the GPU space. There is not a single sign of it slowing down. Transistors still shrink exponentially, and speeds still increase exponentially.
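For context on the "$300 on an RTX 4090" figure above: as far as I can tell, those results are LoRA-style fine-tunes of an existing base model (Alpaca and its descendants), not from-scratch training. Here's a minimal sketch of why that is so cheap, assuming the Hugging Face transformers/peft stack; the checkpoint name, prompt, and hyperparameters are placeholders for illustration, not anything benchmarked here:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "huggyllama/llama-7b"   # placeholder: any ~7B causal LM checkpoint
tok = AutoTokenizer.from_pretrained(base)

# Load the base model in fp16 (~14 GB of weights). In practice people often
# load it 8-bit/4-bit instead, to leave more headroom on a 24 GB consumer card.
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.float16, device_map="auto")

# Freeze the base weights and train only small low-rank adapter matrices.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all parameters

opt = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=2e-4)

# One toy training step on a hypothetical instruction/response pair.
batch = tok(["### Instruction: ...\n### Response: ..."], return_tensors="pt").to("cuda")
out = model(**batch, labels=batch["input_ids"])
out.loss.backward()
opt.step()
opt.zero_grad()
```

Only the low-rank adapters get gradients and optimizer state, which is why a single consumer card is enough; the tens of millions of dollars went into pretraining the base model being reused.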
This was an interesting post and I’m looking forward to the other two, if you write them (or did I already miss them?). I’m especially interested in how you wound up 95% confident that we won’t have AGI by 2043 by a path that involves substantially different AI algorithms than we are using today. In particular, 2043 is as far into the future as 2003 was in the past. And an awful lot has happened in AI since 2003—not just more compute but also new algorithmic ideas / paradigms, both within deep learning (which I believe was an obscure backwater in 2003; this is almost a decade before AlexNet, forget about ResNets, deep Q learning, BatchNorm, PPO, transformers, etc. etc.) and also outside deep learning (e.g. probabilistic programming is wildly more advanced today than 2003). Like, there’s a “scaling law” describing how LLM competence depends on training compute and data. But who’s to say that a future very different non-LLM AI algorithm couldn’t build a radically more competent AI out of the same training compute as today’s LLMs do?
In this post of mine — https://www.alignmentforum.org/posts/KJRBb43nDxk6mwLcR/ai-doom-from-an-llm-plateau-ist-perspective — I mostly agree with the main message of this post, except for (1) I argue that it does not imply that AGI concerns are overstated, (2) I suggest that AGI in the next 10 or 20 years is nevertheless quite plausible (I’d go way higher than 5%), per my previous paragraph.
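For readers who haven't seen it, the "scaling law" mentioned above is usually written as a power law in parameter count N and training tokens D. The Chinchilla paper (Hoffmann et al., 2022) fits roughly this form; the constants below are their published estimates quoted from memory, so treat them as approximate:

```latex
% Chinchilla-style parametric fit of pretraining loss (Hoffmann et al., 2022):
% N = number of parameters, D = number of training tokens.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad E \approx 1.69,\; A \approx 406.4,\; B \approx 410.7,\;
\alpha \approx 0.34,\; \beta \approx 0.28 .
```

The exponents are empirical fits for one family of architectures and training recipes, which is exactly the commenter's point: nothing in them constrains what a different future algorithm could get out of the same compute.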
I hope you continue this Newsletter Brian.
I liked this short era in which the best way to get more intelligence was to feed it nicely measurable stuff: human-written data, parameters, compute. It's like a resource pipeline in a game: insert resources, go up the tech tree. But asking for it to continue indefinitely just so our world remains simple is false hope. What Sutton says is not that algorithms suck; it's that making intelligence work the way researchers think their own minds work is an uphill battle, with all the gains washed out by scale. What does not wash out at scale? Search, and learning. Both can and will be improved, and it's *very* hard to predict when and how.
There are some things that cannot be modeled by grounding them in empirical reality. It is not only due to randomness and research advances being unpredictable. No, it is also due to the extraordinary power of the choices that certain small groups of actors will make. Readily available facts about the world can only help model the future outcomes that do not depend heavily on a small set of decisions.
(Or, for those who really like evidence, here's some weak evidence: https://www.alignmentforum.org/posts/CfpAXccrBvWpQw9xj/algorithmic-improvement-is-probably-faster-than-scaling-now. But evidence, in either direction, does not help to reliably predict how the tech tree will look 10 years from now. The time of Popper is long gone.)
Disclaimer: I am no expert, just an ordinary non-ML programmer, and I have no inside information.
A) How to get started understanding TPU benchmarks:
1) Compare fp32 vs fp16. These are both widely used data types, so there should be really good information out there about them. My personal opinion: designing a chip for fp32 is different from designing one for fp16, and you can get more operations per second out of fp16 (see the micro-benchmark sketch after this list).
2) Google still uses TensorFlow, and the documentation at https://www.tensorflow.org/lite/performance/post_training_quantization mentions "full integer quantization" being "4x smaller" (int8 vs fp32), which means it could perform almost like fp8.
3) A possible explanation for why Google’s TPU metrics look like outliers is that they may be measuring using fp16 or fp8
4) I think there's a debate over whether quantization actually reduces training cost, but it is used pretty frequently on the inference side, so it probably is meaningful there. Google marks their TPUv4 as "inference-only".
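To make point 1 concrete, here is a tiny micro-benchmark sketch (PyTorch; assumes a CUDA GPU, and the matrix size is arbitrary). On most recent NVIDIA GPUs the fp16 run finishes several times faster than the fp32 run, which is why a quoted "FLOPS" number means little until you know which data type it was measured in:

```python
import time
import torch

def bench_matmul(dtype, n=8192, iters=20):
    """Time n x n GPU matrix multiplies in the given dtype, return TFLOP/s."""
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    for _ in range(3):          # warm-up
        a @ b
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    secs = time.time() - start
    return 2 * n**3 * iters / secs / 1e12   # ~2*n^3 FLOPs per matmul

print("fp32:", bench_matmul(torch.float32), "TFLOP/s")
print("fp16:", bench_matmul(torch.float16), "TFLOP/s")
```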
Note: none of the below is significant enough to outweigh the "slowing down" hypothesis, but maybe it is enough to say "we're not THAT close to the very top of the S curve"
B) Personal opinions on progress
1) Most of the biggest labs have gone closed-source (except for Facebook and the UAE), which has slowed down progress, and that's probably not reversing.
2) Because of closed source, there may be model capability advances we aren't seeing because they are currently too expensive to be economical; in the future, as people get better at using AI models, they may be willing to pay more for more performance (because they will have learned how to put that performance to good use).
C) AI hardware performance is also dependent on memory throughput (a rough back-of-the-envelope calculation follows this list).
1) Apple's M1 chip got surprisingly great benchmarks compared to older Apple chips because of increased memory throughput.
2) Cerebras is making much bigger chips, which I think also helps with memory throughput (but their benchmarks are not public).
3) It's possible that NVIDIA could copy Apple and Cerebras techniques to get better memory throughput
4) Stacked memory (SRAM) on top of compute transistors may be coming in lieu of smaller/faster compute transistors in future generations of hardware (https://fuse.wikichip.org/news/5531/amd-3d-stacks-sram-bumplessly/) (I don't know if better memory throughput really outweighs the lack of faster compute transistors though)
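Here is the back-of-the-envelope for why memory throughput matters so much, using batch-1 LLM decoding as the example. The peak numbers are approximate public specs for an A100 80GB and the model size is hypothetical; the point is the ratio, not the exact figures:

```python
# Rough arithmetic-intensity check for single-stream (batch-1) LLM decoding.
peak_fp16_tflops = 312      # A100 dense fp16 tensor-core peak, roughly
peak_mem_bw_tbps = 2.0      # A100 80GB HBM2e bandwidth, roughly (TB/s)

params = 13e9               # hypothetical 13B model, small enough for one card
bytes_per_param = 2         # fp16 weights

# Each generated token touches every weight once: ~2 FLOPs and ~2 bytes per weight.
flops_per_token = 2 * params
bytes_per_token = bytes_per_param * params

arithmetic_intensity = flops_per_token / bytes_per_token            # ~1 FLOP/byte
ridge_point = peak_fp16_tflops * 1e12 / (peak_mem_bw_tbps * 1e12)   # ~156 FLOP/byte

print(f"intensity ~{arithmetic_intensity:.1f} FLOP/byte vs ridge ~{ridge_point:.0f}")
print("bandwidth-bound ceiling ~",
      round(peak_mem_bw_tbps * 1e12 / bytes_per_token), "tokens/sec")
```

Since ~1 FLOP/byte is far below the ~156 FLOP/byte needed to be compute-bound, single-stream generation speed is set almost entirely by how fast the weights can be streamed from memory, which is why bigger on-package SRAM and higher HBM bandwidth move the needle.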
D) Learning by doing and competition can bring margins down
1) AMD has money now thanks to its advantage over Intel in the CPU market and so can invest in a competitor to CUDA; competition could bring prices down.
2) If Moore's Law is ending, then learning by doing could lead to TSMC bringing down prices through better production efficiency (e.g., steps currently done by humans could instead be done by machine), and other competitors (e.g., Samsung, maybe Intel) will eventually catch up, enter the high-end GPU fabrication space, and lower prices through competition.
E) Software progress is not just on the training end but also on the inference end
1) Anthropic came out with a 100K context window. That doesn't affect training costs, but it means a more capable model. Based on the previously understood economics of inference, as context length n goes up, cost goes up roughly as n^2 (quick arithmetic after this list), so Anthropic almost certainly made an advance here (the other option is that every other company except Anthropic is way overcharging).
2) If inference cost reductions (e.g., quantization) and inference capability measures (e.g., bigger context windows) improve, then training larger models makes more economic sense.
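To put a number on the n^2 point above (pure back-of-the-envelope; it ignores the parts of the model whose cost grows only linearly with context):

```python
# Naive quadratic-attention scaling: how much more attention compute does a
# longer context window imply, all else equal?
def attention_cost_ratio(old_ctx: int, new_ctx: int) -> float:
    """Ratio of attention score-matrix work (n^2 scaling) between two context sizes."""
    return (new_ctx / old_ctx) ** 2

print(attention_cost_ratio(4_096, 100_000))   # ~596x more attention work
# Linear-scaling parts (MLPs, projections) only grow ~24x, so the blended
# cost increase for a real model sits somewhere in between.
```

So a ~25x longer window naively implies hundreds of times more attention compute per full-context pass, which is why offering it at similar prices suggests either an algorithmic advance or very different margins.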
I'm curious about your thoughts on GPUs vs. specialized hardware, especially FPGAs. I'm not necessarily claiming that FPGAs will be a driver of cutting-edge AI development, but I do wonder if they have a significant role to play. Microsoft, for example, has an interesting project:
https://www.microsoft.com/en-us/research/project/project-catapult/
Reading the article again, I’m left unconvinced, mainly because I think it fails to argue convincingly for any actual hard limit to ML progress along the lines it discusses.
Even if the current semiconductor technology turned out to be the final limit of what's possible, there would still be plenty of room for making the same technology cheaper and more widely available. And even if programming models remained difficult and demanded special skills, the key players can surely find enough smart people capable of grappling with them to drive further progress.
Indeed, even if nobody managed to design a better chip than the A100 ever again, I’m sure that in not very many years we’d nevertheless end up with A100s (or equivalents) available for $1000, then $100, then probably $10 or even $1, instead of the current $10K or whatever. The astronomical sums needed to train the current state of the art models will inevitably fall by several orders of magnitude, even if technology makes no fundamental progress and we only see more of the same kinds of fabs but in a larger and more competitive market. It’s the same for not just chips, but also all other technologies needed to connect them on a large scale.
And while people skilled at parallel programming and intricate low-level optimization are rare, they’re not *that* rare. It’s far beyond the skills of an average programmer, but the big players are definitely able to hire teams of people who can deliver hand-optimized large-scale parallel applications on demand quickly.
So in the end, any argument against short AGI timelines would still need to hold in a hypothetical world where vast clusters of A100s can be obtained readily for couch change, and every big player has expert teams ready to write and hand-optimize applications for them in short order. Because that is indeed the world we're likely to be in fairly soon, certainly much earlier than the 2040s.
You telling me you write 100x the code you did before?
How much code were you writing before? Did you take a year to ship a simple feature? Do you do 10 a day now?
Shocking to me that people cannot conceptualize what 100x means. It's a big reason the doomer takes are popular.
You made a bunch of useless noise and then claimed a completely ridiculous number. And then refused to answer some very simple questions.
Once again:
Did you take a year to ship a simple feature? Do you do 10 a day now?
And then your reply is even more full of noise. Completely detached from reality, only focused on how you feel. It doesn't matter if you feel "100x". Maybe heroin is the real AGI that you're looking for.