Attention may be all we need, but are Transformers the best architecture for attention?
Liquid.ai's synaptically modeled NNs are much more efficient than standard NNs at learning, and they are modeled more closely on biology. The following video by their founder is a great intuitive introduction to how NNs, and consequently LLMs, work, and it shows how much more efficient their architecture is. As an OG in tech, I can tell you that being first out of the gate with initial dominance rarely makes you the long-term winner. This work of Liquid.ai's is very interesting.
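For anyone curious about the mechanics behind the video, here is a minimal sketch of the liquid time-constant (LTC) neuron idea associated with this line of work: the neuron's effective time constant is modulated by its input, which is what makes the dynamics "liquid." This is an illustrative toy, not Liquid.ai's actual code; the function names, parameters, and values are all hypothetical.

```python
import numpy as np

# Illustrative sketch (not Liquid.ai's implementation): one forward-Euler
# step of a liquid time-constant neuron of the general form
#   dx/dt = -[1/tau + f(I)] * x + f(I) * A
# where f = sigmoid(W @ I + b) is an input-dependent gate. Because f
# multiplies the decay term, the input changes how fast the state reacts.

def ltc_step(x, I, W, b, tau=1.0, A=1.0, dt=0.01):
    f = 1.0 / (1.0 + np.exp(-(W @ I + b)))  # nonlinear synaptic gate
    dx = -(1.0 / tau + f) * x + f * A       # input modulates the decay rate
    return x + dt * dx

# Tiny usage example: 4 neurons driven by a 3-dimensional input signal.
rng = np.random.default_rng(0)
x = np.zeros(4)
W = rng.normal(size=(4, 3))
b = np.zeros(4)
for _ in range(100):
    I = rng.normal(size=3)                  # stand-in input stream
    x = ltc_step(x, I, W, b)
print(x)
```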