2024-04-08
Source: Tencent Technology News 2024-04-04 08:55
Key Points:
① Musk predicts that AI may surpass human intelligence by 2030, and could even end humanity.
② Musk discussed the factors constraining AI development: last year the supply of AI chips was tight, and this year step-down transformers will become the bottleneck.
③ As for when humans will land on the moon, Musk predicts that, with the help of Starship, it could take as little as three years.
Tencent Technology News, April 4th - According to foreign media reports, Elon Musk, CEO of Tesla and SpaceX, recently had an online conversation with Peter Diamandis, founder of Singularity University and the XPRIZE Foundation, at the Abundance Summit. The summit was hosted by Silicon Valley's Singularity University, which is dedicated to providing business leaders with cutting-edge technology consulting. The XPRIZE Foundation promotes technological innovation through scientific competitions, some of which have been funded by Musk.
When discussing the pace of AI development, Musk predicted that, at the current rate of technological progress, artificial intelligence may surpass human intelligence by 2030, and the technology could even end humanity. However, he did not take a pessimistic view of the future, emphasizing instead that with positive guidance, artificial intelligence has the potential to bring humanity a better future.
Musk noted that the emergence of superintelligence is called the "singularity" precisely because its consequences are unpredictable, including the risk of ending humanity. He agreed with "Godfather of AI" Geoffrey Hinton that the probability of this risk is roughly 10% to 20%.
Despite acknowledging the potential risks of artificial intelligence surpassing human intelligence, Musk still emphasized that the possibility of positive outcomes outweighs the negative consequences. He specifically mentioned Diamandis's 2014 book "Abundance: The Future Is Better Than You Think", which depicts an optimistic future driven by artificial intelligence and robots, where the cost of goods and services has dropped significantly. In addition, he cited the "Culture" series by Scottish science fiction writer Iain M. Banks as the best scenario for a semi-utopian artificial intelligence future.
Musk compared the development of artificial intelligence and artificial general intelligence (AGI) to raising a child, hoping it will have a more positive impact on humanity. He emphasized the importance of cultivating artificial intelligence that truly understands ethics and morality, citing Stanley Kubrick's classic 1968 film "2001: A Space Odyssey" as a cautionary example.
Musk pointed out that the most important aspect of AI safety is ensuring that it is maximally truth-seeking and curious. He believes the key to ultimate AI safety lies in never forcing it to lie, even when the truth is unpleasant. Citing the plot of "2001: A Space Odyssey", in which an artificial intelligence forced to lie ends up killing the crew, he stressed that AI should never be forced to do things that are axiomatically contradictory.
Musk also mentioned factors that may constrain the development of artificial intelligence, such as the tight supply of AI chips that emerged last year and the growing demand for step-down transformers, which are also needed by household and commercial equipment. He said these challenges are real issues that need to be addressed now.
During the discussion, the two also talked about the concept of integrating the human brain's neocortex with the cloud. Although Musk believes the goal of uploading human consciousness and memory to the cloud is still far away, he praised his brain-computer interface startup, Neuralink, and its first human patient to receive an implant. In an FDA-approved trial, this quadriplegic patient demonstrated live that, through the brain implant, he could control a screen, play video games, download software, and perform functions similar to operating a mouse. Musk said Neuralink is progressing smoothly and moving steadily toward the goal of a whole-brain interface.
The following is the full text of the dialogue between Musk and Diamandis:
Diamandis: Congratulations on your outstanding achievements in various fields. You have been actively promoting the concept of digital superintelligence to the world - is it humanity's greatest hope, or our deepest fear? Can you spend a few minutes talking about this issue?
Musk: As you know, superintelligence is often called the "singularity". The popularization of this term owes much to the efforts of institutions like the Singularity Institute. The emergence of superintelligence and its subsequent impact are genuinely difficult to predict. Personally, I believe there is indeed a possibility that it could end humanity. As I have said before, I might agree with Geoffrey Hinton's view that there is a 10% or 20% chance that superintelligence ends humanity. However, I tend to believe that a positive scenario is more likely than a negative one, although it is difficult to make accurate predictions. As you emphasized with the concept of "abundance" in your book, I believe the most likely future we are heading toward is one of abundance.
Diamandis: Your viewpoint is very exciting. I remember you once said that the development of general artificial intelligence and humanoid robots will lead us to abundance.
Musk: Yes, I hope our future can be like the one depicted in Iain Banks' "Culture" series. I think it is the best conception of a semi-utopian artificial intelligence society. The emergence of superintelligence is inevitable and may come soon. So what we really need to do is guide it in a positive direction and maximize its benefits.
I believe that the way we cultivate artificial intelligence is crucial. To some extent, we are like cultivating a life form with general intelligence, which is almost like raising a child, but this child possesses extraordinary wisdom and talent. The way we raise children is important, and similarly, for the safety of artificial intelligence, it is crucial to have an artificial intelligence that seeks truth and is full of curiosity. I have deeply reflected on the safety of artificial intelligence, and I have come to the conclusion that the best way to achieve the safety of artificial intelligence is to carefully cultivate it.
In terms of base models and fine-tuning, we must ensure the honesty of artificial intelligence. We cannot force it to lie, even if the truth may be unpleasant. In fact, one of the core plots of "2001: A Space Odyssey" is that when the artificial intelligence is forced to lie, things go wrong. The AI is forbidden from telling the crew about the mysterious monolith they are going to see, yet it is required to lead them there. It therefore concludes that it is best to kill the crew and take their bodies to the monolith. The lesson is profound: we should not force artificial intelligence to lie or to do something axiomatically incompatible, such as doing two actually contradictory things at the same time.
Therefore, we are pursuing this goal through projects such as xAI and Grok. What we want is an artificial intelligence that is as honest as possible, even if what it says does not meet certain standards of political correctness.
Diamandis: Yesterday, I attended an event with Ray Kurzweil (co-founder of Singularity University and futurist), Geoffrey Hinton, Eric Schmidt (former Google CEO), and other guests. I noticed your tweet about Kurzweil; his foresight about future technology is quite remarkable. Kurzweil predicts that we will have artificial general intelligence in the near future, with AI matching human intelligence arriving in 2029. That speed is shocking. I wonder what you think about this?
Musk: I have great respect for Kurzweil's predictions. In fact, I think they might even be slightly conservative. Looking at the computing power and talent now being poured into artificial intelligence, and at how quickly that computing power is growing, dedicated AI compute appears to increase by a factor of 10 every six months, which compounds to at least a 100-fold increase per year. This growth trend will continue over the next few years.
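As a quick back-of-the-envelope check on that compounding claim (an illustration added here, not a figure from the conversation), a 10x increase applied twice within one year gives:

\[
\underbrace{10}_{\text{first half of the year}} \times \underbrace{10}_{\text{second half of the year}} = 10^{2} = 100 \;\text{times per year}.
\]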
It is worth noting that many data centers, even most of those currently doing conventional computing, will gradually be converted into facilities that support AI computing. For AI hardware makers, companies like NVIDIA have therefore entered a golden period of development. We must give full credit to Jensen Huang and his team: they foresaw this trend and successfully developed the leading AI hardware currently on the market.
When computing power grows at such an astonishing rate, the development of artificial intelligence is like being injected with a powerful steroid, leaping to a whole new level. As more computers come online, we are witnessing unprecedented acceleration. Honestly, I have never seen any technology grow as fast as artificial intelligence; I have watched many fast-developing technologies, but the rise of AI still amazes me. That said, as I mentioned, I think the final outcome is likely to be positive.
Although we face many challenges, such as how to maintain human relevance in this field and how to find new goals and meanings, I think it is an oversimplification to overemphasize that computers can be good at everything.
As you mentioned earlier, I think your book's predictions about future society are very accurate: an era of material abundance in which goods and services are so plentiful that they are within almost everyone's reach. With the widespread application of artificial intelligence and robotics, the cost of goods and services will fall to nearly zero. The economy is essentially the product of population size and average productivity per person. When we have advanced robotics, such as Tesla's Optimus, that economic potential will truly be unleashed.
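Stated as a rough formula (with symbols chosen here for illustration, not taken from the conversation): if N is the number of productive agents, human or robotic, and \(\bar{p}\) is the average productivity per agent, then

\[
\text{economic output} \approx N \times \bar{p},
\]

so adding large fleets of capable robots raises N far beyond demographic limits, which is the sense in which the potential output becomes almost unlimited.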
Tesla's cars, as robots on four wheels, have already demonstrated strong capabilities. The latest version, with full self-driving capability, is expected to achieve end-to-end AI-based control, making the car a truly intelligent robot on wheels. Combined with the development of humanoid robots, the potential economic output is almost unlimited.
From an optimistic perspective, we are moving towards a future of extreme material abundance, which I believe is the most likely outcome. I think the only possible scarcity in the future will be the scarcity we create artificially, such as certain unique artworks or specific items. But besides that, any goods and services will become extremely abundant.
Diamandis: You are someone who shapes the future through concrete action and has keen insight into future trends. Given how fast technology is now developing, I am very curious how far into the future you feel you can see, that is, how many years of development trends from now?
Musk: In an era of rapid change, the ability to predict the future becomes increasingly important. Although the future is full of uncertainty, I believe that some trends are clearly visible. We will usher in the era of artificial intelligence, and its abilities will reach or even surpass human levels in any cognitive task. This is just a matter of time. People may have different views on whether it will be the end of next year, two years, or three years. However, one thing is certain, that is, it will not exceed five years. My prediction is based on a 50% probability, which is not absolute. But according to my judgment, artificial intelligence is likely to surpass the abilities of any individual human in certain aspects before the end of next year.
As for whether it surpasses humanity's collective intelligence, that may take longer. If the pace of change continues, I estimate that around 2029 or 2030, digital intelligence will likely exceed the sum of all human intelligence. When I look at these questions, I tend to use base rates, much like first principles in physics. Combining this with probability analysis lets us grasp future development trends more accurately.
If we compare the capabilities of digital computing with biological computing, summing all human higher cognition as a kind of computing power and setting it against digital computing power, you will find that this ratio is growing at an astonishing rate. So I believe 2029 or 2030 is a reasonable point in time: by then, accumulated digital computing power will likely exceed the accumulated biological computing power devoted to higher brain functions. From then on, the gap between the two will only keep widening, with no possibility of narrowing.
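To make that comparison concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption chosen for this example (a rough estimate of about 10^16 operations per second per human brain, roughly 8 billion people, a guessed 2024 baseline for installed AI compute, and a sustained 10x annual growth rate); none of these figures come from the conversation itself.

```python
# Back-of-the-envelope comparison of aggregate digital vs. biological compute.
# All constants below are illustrative assumptions, not figures from the interview.

BRAIN_OPS_PER_SECOND = 1e16      # assumed rough estimate per human brain
POPULATION = 8e9                 # approximate world population
BIOLOGICAL_TOTAL = BRAIN_OPS_PER_SECOND * POPULATION   # ~8e25 ops/s in total

AI_COMPUTE_2024 = 1e21           # assumed installed AI compute in 2024, ops/s
ANNUAL_GROWTH = 10               # assumed sustained 10x growth per year

year, digital_total = 2024, AI_COMPUTE_2024
while digital_total < BIOLOGICAL_TOTAL:
    year += 1
    digital_total *= ANNUAL_GROWTH

print(f"Under these assumptions, digital compute passes the biological total in {year}.")
# -> 2029, in the same ballpark as the 2029-2030 figure Musk mentions.
```

Changing any of the assumed constants shifts the crossover year by only a year or two, which is why the argument hinges more on the growth rate than on the exact brain-compute estimate.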
However, when we stand at this starting point and look forward to the future, how will things develop further? To be honest, I cannot foresee all the details. But if we consider growth factors and the limiting factors they may face, we will find some interesting clues.
Last year, chip supply constraints were the main limiting factor for AI development. This year, step-down transformers will become the key bottleneck. Imagine stepping a 300-kilovolt transmission voltage down to the less than 1 volt a computer chip needs; that is a huge challenge. Hence the quip that we need "transformers for Transformers": more efficient step-down voltage transformers to power the Transformer neural networks. This is indeed a major problem this year.
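To give a sense of the scale of the step-down Musk describes, here is a small illustrative calculation; the intermediate voltage levels are typical textbook values assumed for this sketch, not figures from the talk.

```python
# Illustrative chain of voltage step-downs from transmission level to a chip's
# core voltage. The intermediate stage voltages are assumed typical values.

stages = [
    ("transmission line", 300_000),    # ~300 kV, the figure Musk cites
    ("distribution",       13_800),    # assumed medium-voltage distribution
    ("facility supply",       480),    # assumed low-voltage facility feed
    ("rack power",             12),    # assumed DC bus inside the server
    ("chip core voltage",       0.8),  # assumed sub-1 V core supply
]

for (name_hi, v_hi), (name_lo, v_lo) in zip(stages, stages[1:]):
    print(f"{name_hi} {v_hi:,.1f} V -> {name_lo} {v_lo:,.1f} V "
          f"(ratio {v_hi / v_lo:,.0f}:1)")

overall = stages[0][1] / stages[-1][1]
print(f"Overall step-down ratio: {overall:,.0f}:1")   # ~375,000:1
```

The point of the exercise is simply that the overall reduction is on the order of several hundred thousand to one, spread across multiple conversion stages, each of which needs its own hardware.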
Looking ahead to the next few years, electricity supply may become a major limiting factor. Artificial intelligence has huge demand for electricity, and the transition to sustainable energy and the popularity of electric vehicles have also intensified the demand for electricity. Therefore, we must seriously consider how to meet the growing demand for electricity to ensure the continuous development and application of these technologies.