Hello everyone. Just wanted to share something small.
I read a blog post about how Large Language Models (LLMs) have evolved over the past few months.
First, we have pre-trained LLMs, which generate output based purely on the data they were trained on. A good example is OpenAI's ChatGPT, which runs on GPT-3.5 and GPT-4 (the premium tier).
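To make that concrete, here's a minimal sketch of using a pre-trained model as-is, assuming the Hugging Face transformers library is installed; "gpt2" is just a small publicly available model used as an example, not anything specific to the blog I read.

```python
# Minimal sketch: asking a pre-trained model to generate text.
# Assumes the Hugging Face transformers library; "gpt2" is an example model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model answers purely from what it learned during pre-training.
result = generator("Large language models are", max_new_tokens=30)
print(result[0]["generated_text"])
```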
You can also take one of these LLMs and train it further on specific data so it performs a specific task. This is how many AI websites are springing up: they take a base LLM and fine-tune it on their own data until it produces reasonable output for that task (a rough sketch of this is below).
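Here's a minimal fine-tuning sketch, again assuming the Hugging Face transformers and datasets libraries; "my_domain_texts.txt" is a hypothetical file of task-specific text, one example per line, and "gpt2" is just a small stand-in base model.

```python
# Minimal sketch: fine-tuning a base LLM on domain-specific text.
# Assumes transformers + datasets; file name and model are placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token   # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Load and tokenize the domain data.
dataset = load_dataset("text", data_files={"train": "my_domain_texts.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()   # the base model's weights are adapted to the new data
```

The key point is that the new knowledge ends up baked into the model's weights, which is why this has to be repeated whenever the data changes.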
The next trend, though, looks like a real break in how AI evolves.
Recently, developers have been building AI systems that take the user's input, turn it into a lookup key, search an index of relevant data, and combine the retrieved data with the input to produce the output. It's similar to how a CPU and RAM work together: when processing data, the CPU pulls what it needs from RAM, which in turn fetches it from the HDD/SSD; the CPU combines the user's input with that data to produce a result, and afterwards the data is discarded from RAM. The same idea applies here: the retrieved data is used for one answer and then thrown away, rather than being trained into the model (see the sketch below).
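Here's a toy sketch of that retrieval idea. The embed() and generate() functions are placeholders I made up to stand in for a real embedding model and a real LLM call; only the retrieval logic in between is the point.

```python
# Toy sketch of retrieval-augmented generation: index, retrieve, generate.
# embed() and generate() are placeholders, not real library functions.
import numpy as np

documents = [
    "The warranty covers manufacturing defects for two years.",
    "Returns are accepted within 30 days with a receipt.",
    "Support is available weekdays from 9am to 5pm.",
]

def embed(text: str) -> np.ndarray:
    """Placeholder: a real system would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def generate(prompt: str) -> str:
    """Placeholder: a real system would call an LLM here."""
    return f"(model answer based on: {prompt[:60]}...)"

# 1. Index: embed every document once, ahead of time.
index = np.stack([embed(d) for d in documents])

def answer(question: str, top_k: int = 2) -> str:
    # 2. Retrieve: embed the question and find the closest documents
    #    by cosine similarity.
    q = embed(question)
    scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    best = np.argsort(scores)[::-1][:top_k]
    context = "\n".join(documents[i] for i in best)
    # 3. Generate: hand the model the question plus the retrieved context.
    #    Nothing is written into the model's weights; the context is
    #    discarded after the answer, like data cleared from RAM.
    return generate(f"Context:\n{context}\n\nQuestion: {question}")

print(answer("How long is the warranty?"))
```

Updating the system then just means updating the document index, not retraining the model.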
This means LLMs won't have to be retrained over and over to improve accuracy, and they won't need to consume huge amounts of training data each time. It should also give generative AI capabilities a big boost.
Hope what I shared was useful. Let me know if you need any clarifications.