Data on Meta's upcoming Llama 3.1-405B model has reportedly leaked. Benchmark figures suggest its performance surpasses GPT-4o, but its inference cost is roughly triple and its coding performance is weak. A model with this many parameters is likely beyond the reach of individual developers, making it better suited to enterprises and public-sector organizations. The leaked weights can already be downloaded quickly, but they cannot run on ordinary consumer GPUs. Some netizens are skeptical of Meta's release, arguing that its cost-effectiveness and capabilities are nothing to look forward to. The leak reportedly originated from Microsoft's Azure GitHub. Given its high compute requirements, the model is less cost-effective than GPT-4o mini.
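A back-of-the-envelope calculation shows why a 405B-parameter model is out of reach for ordinary GPUs. The sketch below estimates the memory needed just to hold the weights at common precisions; the bytes-per-parameter figures are standard precision sizes, not official requirements from Meta, and the estimate ignores activations and KV-cache overhead:

```python
# Rough weight-memory estimate for a 405B-parameter model.
# Illustrative only: ignores activations, KV cache, and framework overhead.
PARAMS = 405e9  # 405 billion parameters

def weight_memory_gb(bytes_per_param: float) -> float:
    """Approximate memory (GiB) to store the model weights alone."""
    return PARAMS * bytes_per_param / 1024**3

for label, bpp in [("fp16/bf16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{label}: ~{weight_memory_gb(bpp):,.0f} GB")
```

Even aggressively quantized to 4 bits, the weights alone approach 190 GB, far beyond any single consumer card; at fp16 they require on the order of ten 80 GB accelerators.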