Up 37%: NVIDIA GPUs continue to face shortages and price increases

Time: 2023-05-18
Source: Mantianxin Author: Vert
    As the ChatGPT-driven AI craze continues to heat up, more and more companies are building large language models, and demand for computing power is rising sharply. NVIDIA GPU prices are not only climbing, but delivery times are also stretching out, and some new orders may not be delivered until December.
A100 prices up 37.5%, A800 prices up 20%
    According to Jiweiwang, an agent revealed that the price of the NVIDIA A100 has been rising since December last year; as of the first half of April this year, the cumulative increase over those five months reached 37.5%. Over the same period, the A800's cumulative increase reached 20.0%. Delivery times for NVIDIA GPUs have also lengthened: previously around one month, they now typically run three months or more, and some new orders may not be delivered until December.
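    To put the cumulative increases in concrete terms, the short Python sketch below applies them to a baseline price. The December baseline figures are hypothetical placeholders for illustration, not prices from the Jiweiwang report.

def price_after_increase(baseline: float, cumulative_pct: float) -> float:
    """Return the price after a cumulative percentage increase."""
    return baseline * (1 + cumulative_pct / 100)

# Baseline December prices are HYPOTHETICAL placeholders (USD), not from the report.
a100_baseline = 10_000.0
a800_baseline = 9_000.0

print(f"A100: {a100_baseline:,.0f} -> {price_after_increase(a100_baseline, 37.5):,.0f} USD (+37.5%)")
print(f"A800: {a800_baseline:,.0f} -> {price_after_increase(a800_baseline, 20.0):,.0f} USD (+20.0%)")
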
    Facing this huge gap between supply and demand, many customers have had to "tighten their belts." According to Jiweiwang, there are roughly 40,000 to 50,000 A100s available in China for training large AI models, so supply is quite tight. Some cloud service providers have strictly restricted internal use of these advanced chips, reserving them for tasks that demand heavy computing.
    Notably, something similar has already happened at Microsoft. In March, foreign media reported that Microsoft was facing a shortage of AI server hardware and had to ration AI hardware resources through a "quota" mechanism, limiting how much other internal AI tool development teams could use.
The Story Behind the NVIDIA GPU Price Increases
    Looking at the timeline, some GPU industry insiders point out that NVIDIA had in fact already announced an A100 price increase of about 20% in June last year, and channel merchants stepped up their hoarding. The A800 likewise rose in price before the ChatGPT boom, though the market reflected it with some lag; the arrival of the ChatGPT boom then amplified the phenomenon.
    As one GPU industry insider put it: "The price increase of NVIDIA GPUs is partly related to ChatGPT, and delivery cycles have been affected as well, which has led to widespread speculation in cards on the market."
    In addition, on overseas e-commerce platforms, the price of NVIDIA's new flagship GPU, the H100, had already risen to over $40,000 by mid-April.
    Today, technology giants are racing to launch their own large models, and demand for GPUs keeps rising. OpenAI has pointed out that for large AI models to keep achieving breakthroughs, the required computing resources must double every 3 to 4 months, and funding must grow exponentially to keep pace.
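    To make the doubling claim concrete, the minimal Python sketch below works out the implied growth curve. The starting compute level (1 unit) and the time horizon are illustrative assumptions, not figures from OpenAI.

def required_compute(months: float, doubling_period_months: float, start: float = 1.0) -> float:
    """Compute demand after `months`, doubling every `doubling_period_months`."""
    return start * 2 ** (months / doubling_period_months)

# The 6/12/24-month horizon and the starting level are illustrative assumptions.
for months in (6, 12, 24):
    slow = required_compute(months, doubling_period_months=4)
    fast = required_compute(months, doubling_period_months=3)
    print(f"after {months:2d} months: {slow:6.1f}x to {fast:6.1f}x the starting compute")

    Over two years, that works out to anywhere from 64x to 256x the starting compute, which is why the article notes that funding has to scale exponentially as well.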
    Industry insiders also note that basic computing power is one of the main costs of training large AI models. If prices keep rising, the extra investment required, whether from internet companies, AI giants, or startups, may be far greater than originally planned.
GPUs Set the Computing Power Threshold for Large Models
    As is well known, chip capability directly affects the effectiveness and speed of high-compute training. In contrast to general-purpose computing built on CPUs, AI relies mainly on intelligent computing power supplied by AI chips such as GPUs, GPGPUs, and ASICs, used for AI training and inference. The GPU, hardware designed for high-performance workloads such as graphics, video, and games, stands out from other hardware in raw computing power. With the release of products such as the NVIDIA A100 and H100, the A100's AI inference throughput reportedly reaching 249 times that of a CPU, the GPU has become the core hardware of today's AI computing power.
    According to a CICC research report, strengthening the interconnect between multiple GPUs improves parallel computing capability, which in turn drives up the number of GPUs needed to scale computing power. As the computing power of a single GPU becomes increasingly insufficient for deep learning workloads, NVIDIA has turned to multi-GPU solutions. Industry analysts suggest that the number of high-end GPUs a manufacturer holds largely determines how large a model it can train, and will become an important indicator for judging a company's large-model capability.
    According to TrendForce, based on the processing power of the NVIDIA A100, the GPT-3.5 model would require 20,000 GPUs to process its training data. A widely held view in the industry puts the computing power threshold for building a good large AI model at 10,000 A100 chips.
    At present, the global GPU market is dominated by NVIDIA, Intel, and AMD, whose discrete GPU shares in the fourth quarter of last year were 85%, 6%, and 9%, respectively. In artificial intelligence, cloud computing, and discrete GPUs, NVIDIA leads the field, with the A100 and H100 delivering peak floating-point throughput of 19.5 TFLOPS and 67 TFLOPS (FP32), respectively.
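    Putting the cited figures together gives a rough sense of why GPU count has become the threshold. The back-of-envelope Python sketch below estimates training time on the oft-cited 10,000-A100 cluster; the total training FLOPs budget and the utilization factor are hypothetical assumptions, while 19.5 TFLOPS is the A100 peak figure quoted above.

SECONDS_PER_DAY = 86_400

def training_days(total_flops: float, num_gpus: int,
                  per_gpu_tflops: float, utilization: float) -> float:
    """Days needed to burn through `total_flops` on a cluster at a given utilization."""
    cluster_flops_per_s = num_gpus * per_gpu_tflops * 1e12 * utilization
    return total_flops / cluster_flops_per_s / SECONDS_PER_DAY

# total_flops (3e23) and utilization (0.3) are HYPOTHETICAL assumptions;
# 19.5 TFLOPS is the A100 peak FP32 figure cited in the article.
days = training_days(total_flops=3e23, num_gpus=10_000,
                     per_gpu_tflops=19.5, utilization=0.3)
print(f"~{days:.0f} days")  # roughly two months under these assumptions

    Under these assumptions the run takes roughly two months; halving the GPU count doubles the time, which is why the number of high-end GPUs a company holds directly gates its large-model work.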
    Disclaimer: This article is reposted from another platform and does not represent the views or position of this site. If there is any infringement or objection, please contact us and we will remove it. Thank you!