Replicating the chatbot is a monumental undertaking since OpenAI has not open-sourced the code for ChatGPT, and even big-tech companies are finding it difficult. Nevertheless, AI firm Colossal-AI has found a way to build your own ChatGPT with far less computing power.
The company uses a PyTorch-based approach that covers the three stages of the training pipeline: pre-training, reward model training, and reinforcement learning. It provides a demo version of the training procedure that needs only 1.62 GB of GPU memory, can run on a single consumer-grade GPU, and increases model capacity on one GPU by up to 10.3x.
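To make the three stages more concrete, the sketch below shows what the reward-model step of such an RLHF-style pipeline can look like in plain PyTorch. It is a toy illustration on random tensors; the TinyRewardModel class, the shapes, and the pairwise loss are assumptions made for this example and are not Colossal-AI's actual code.

```python
# Toy sketch of the reward-model stage of an RLHF-style pipeline in plain PyTorch.
# Model size, data, and loss are illustrative, not Colossal-AI's implementation.
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    """Scores a (prompt + response) embedding with a single scalar reward."""
    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        self.value_head = nn.Linear(hidden_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.value_head(self.backbone(x)).squeeze(-1)

def pairwise_ranking_loss(chosen_rewards: torch.Tensor,
                          rejected_rewards: torch.Tensor) -> torch.Tensor:
    # The preferred response should receive a higher reward than the rejected one.
    return -torch.nn.functional.logsigmoid(chosen_rewards - rejected_rewards).mean()

model = TinyRewardModel()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Toy "preference data": random embeddings standing in for encoded
# (prompt, chosen response) and (prompt, rejected response) pairs.
chosen = torch.randn(8, 64)
rejected = torch.randn(8, 64)

for step in range(10):
    optimizer.zero_grad()
    loss = pairwise_ranking_loss(model(chosen), model(rejected))
    loss.backward()
    optimizer.step()

print(f"final ranking loss: {loss.item():.4f}")
```

The resulting reward model would then be used to score candidate responses during the reinforcement learning stage.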
According to Colossal-AI, a single-machine training process can be up to 7.7 times faster than the original PyTorch, and single-GPU inference can be 1.42 times faster, achievable with just one line of code. Likewise, on a single GPU, one line of code can increase the model capacity available for fine-tuning by up to 3.7 times.
The original PyTorch implementation normally requires an NVIDIA A100 80GB GPU, which costs US$14,999, yet can only fit a model of about 780 million parameters. Colossal-AI, in contrast, multiplies that capacity by 10.3, reaching roughly 8 billion parameters on a single GPU.
Various configurations are available: a single-GPU scale, a multi-GPU scale on a single node, and a 175-billion-parameter scale. In addition, pre-trained language models such as OPT, GPT-3, and BLOOM from Hugging Face can serve as starting points.
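For readers who want to start from one of those pre-trained backbones, the snippet below sketches how such a model can be loaded through the Hugging Face transformers library; the specific checkpoint name (facebook/opt-125m) is an illustrative choice for quick experiments, not one named by Colossal-AI.

```python
# Illustrative only: loading a small OPT checkpoint from Hugging Face
# (BLOOM and other causal LMs are loaded the same way).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"  # example checkpoint, small enough for a laptop
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Quick sanity check: generate a short continuation from a prompt.
inputs = tokenizer("Replicating ChatGPT requires", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```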