Memberships

AI Developers' Club

Public • 25 • Free

8 contributions to AI Developers' Club
What is Swarms AI Framework?
For those who are new, the Swarms AI Framework is a powerful tool designed to orchestrate multiple AI agents seamlessly. Think of it as a conductor leading an orchestra, where each AI agent plays a specific role to create a harmonious and efficient outcome. Here's a simplified breakdown:

1. Multi-Agent Coordination: Just like a team, different AI agents collaborate, each handling specific tasks such as data processing, decision-making, or customer interaction.
2. Scalability: The framework is built to scale, meaning it can handle increasing workloads without compromising performance.
3. Flexibility: It can be customized to fit various applications across industries, from healthcare to finance, and from customer service to smart cities.
4. Efficiency: By automating routine tasks and providing intelligent insights, it helps businesses save time and reduce operational costs.
5. Security: With advanced encryption and blockchain technologies, it ensures data integrity and security, making it reliable for sensitive applications.

Why We Use Swarms AI:
- Enhanced Collaboration: It enables seamless collaboration between different departments by providing real-time data and insights.
- Improved Customer Experience: Through personalized interactions and proactive support, it elevates the customer service experience.
- Operational Excellence: By streamlining workflows and automating mundane tasks, it allows us to focus on strategic initiatives.
- Innovative Solutions: It keeps us at the forefront of technological advancements, helping us stay competitive in a rapidly evolving market.

We believe that by working together and leveraging the capabilities of Swarms AI, we can achieve remarkable results. Let's make the most of this opportunity and drive our business forward with innovation and excellence. Thank you for being part of this exciting journey. Let's get started! 🚀 Feel free to ask any questions or share your thoughts. We're here to support each other and make the best out of this experience.
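For anyone who wants something concrete, here's a rough sketch of what multi-agent coordination looks like in general. To be clear, this is not the actual Swarms API; every name here (Agent, Orchestrator, the lambdas) is a made-up placeholder just to illustrate the idea of specialized agents passing work along a pipeline:

```python
# Toy sketch of multi-agent coordination. NOT the Swarms API;
# all names here are hypothetical illustrations of the concept.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    role: str                      # e.g. "data processing", "decision-making"
    handle: Callable[[str], str]   # the agent's task function

class Orchestrator:
    """Routes one task through a pipeline of specialized agents."""
    def __init__(self, agents: list[Agent]):
        self.agents = agents

    def run(self, task: str) -> str:
        result = task
        for agent in self.agents:
            result = agent.handle(result)  # each agent transforms the work so far
            print(f"[{agent.name}] ({agent.role}) done")
        return result

# Hypothetical usage: three agents cooperating on one task
pipeline = Orchestrator([
    Agent("cleaner",   "data processing",      lambda t: t.strip().lower()),
    Agent("analyst",   "decision-making",      lambda t: f"analysis of: {t}"),
    Agent("responder", "customer interaction", lambda t: f"reply based on {t}"),
])
print(pipeline.run("  Raw Customer Inquiry  "))
```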
1
2
New comment Jul 13
0 likes • Jul 12
Thanks for sharing, Nicholas! I'd like to know what model or models the system uses, whether it can be run locally, and whether or not it is open source.
Awareness is All You Need...
I propose the hypothesis that the foundation for creating a cognitive system that can think and act autonomously is to develop and maintain an "Awareness" of the current situation/environment. A model which iteratively processes this awareness and uses it to make executive decisions should (in theory) be able to simulate a rudimentary form of 'consciousness'. Defining what Awareness means in this context and developing a means of generating and updating that Awareness is the first step in the process.
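To make the hypothesis a bit more concrete, here's a minimal sketch of what such an awareness loop might look like. Everything in it is a hypothetical placeholder (the `query_llm` stub stands in for whatever model you run), not a working implementation:

```python
# Minimal sketch of an iterative "awareness" loop; all names hypothetical.

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call (e.g. a local Llama 3)."""
    return "DONE"  # placeholder so the sketch runs; replace with a real call

def awareness_loop(initial_observation: str, max_iterations: int = 10) -> str:
    awareness = initial_observation  # the running model of the situation
    for _ in range(max_iterations):
        # 1. Re-assess: fold the latest state into the awareness summary
        awareness = query_llm(
            f"Current awareness:\n{awareness}\n\n"
            "Update this summary of the situation/environment."
        )
        # 2. Decide: use the awareness to choose an executive action
        decision = query_llm(
            f"Given this awareness:\n{awareness}\n\n"
            "What is the best next action? Answer DONE if nothing is needed."
        )
        if decision.strip() == "DONE":
            break
        # 3. Act; the action's outcome feeds the next iteration
        awareness += f"\nLast action taken: {decision}"
    return awareness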
2
25
New comment Jul 9
1 like โ€ข Jul 7
After doing some more testing, as one would expect, I'm running into a few issues and inconsistencies, indicating more work is needed. Moreover, I'm struck by how much you can learn about a model by employing more advanced prompting on it. I'm using Llama 3 8B Instruct, and one thing I've discovered is that it likes to think out loud. At first I tried to constrain it to only generating the final answer; then I switched gears to telling it explicitly to express its thinking process; now I'm just giving it instructions and letting it output whatever it wants. As long as the output I need is in there, it will be up to me to extract it. Also, I've been imposing my preference for using the pound sign (#) to indicate specific sections of the prompt and to create the bullets for hierarchical lists, but I've noticed that Llama 3 will natively choose to use asterisks for these purposes. In fact, it seems to be using Markdown to some extent, so I think I'll re-tool my prompts to comply with how it likes these things and see if it's more consistent.
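For anyone curious, here's roughly the kind of re-tooling I mean: the same prompt structured with pound signs versus Markdown-style asterisks. The wording is illustrative, not my actual prompt:

```python
# Illustrative only: the same prompt skeleton structured two ways.
# Llama 3 seems to respond more consistently to asterisk/Markdown style.

prompt_pound_style = """\
# INSTRUCTIONS
Analyze the input below.
# INPUT
{input_text}
# OUTPUT FORMAT
- One finding per line
"""

prompt_markdown_style = """\
**INSTRUCTIONS**
Analyze the input below.

**INPUT**
{input_text}

**OUTPUT FORMAT**
* One finding per line
"""

print(prompt_markdown_style.format(input_text="Teaching Music Theory to Children"))
```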
1 like • Jul 9
I re-tooled the prompts to use asterisks as I mentioned and sure enough, it really helped! The responses now are more focused and less wandering, and there's much less tendency to exhibit inconsistent output formats.

I decided I would step back a bit from providing the model with a type of analysis to perform and giving it the steps needed, and instead ask the model to make those decisions. In this way, it would become truly autonomous, essentially just being told "look at this input and try to get really smart about it". It was easy to generate simple prompts telling it to decide the best type of analysis and to create steps; then I plug those steps in and away we go!

It worked really well, except for one small snag I hit. For no obvious reason, at one point instead of performing the next step in the process it insisted on performing step 2 over again. The results clearly already had the data from steps 1 through 5, but instead of going on to step 6, it simply got stuck going back to step 2. Looking at the few-shot examples I'd given it (which had always worked fine before), I realized that the steps in those examples were numbered using values likely to occur in the real step list. In this case, it was supposed to perform steps 1 through 7 in the **STEPS** list, while the example steps were numbered 2, 5 and 3, respectively. Notice that the first example is given as step 2, and the model kept repeating step 2... I changed the numbers used for the example steps to double-digit numbers (not likely to appear in the actual step list) and "POW!" the problem went away! (See the sketch below for what this fix looked like.)

Finally, I decided to compare the results of these multi-step analyses to what I would get just by putting the input through directly; if the complex analysis wasn't better than a simple one-pass answer, the whole exercise would be pointless. What I found was that given the input "Teaching Music Theory to Children", the multi-step analysis output was focused on the inner concepts of the topic, while the one-pass answer gave a more practical answer consisting mostly of suggestions for how to go about teaching music theory to kids. The difference was striking. Clearly it's better to use one-pass for responses to the user (more direct and practical) but use multi-step when engaging in inner thought-type rumination. This was my intent: use multi-step to re-assess memories and concepts already stored in order to develop a deeper understanding and create more intelligent content in memory that can be accessed later as needed. Seems to work!
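Here's the numbering fix sketched out. The prompt text is paraphrased, not copied from my actual prompts; the point is simply to keep example step numbers out of the range the real step list will use:

```python
# Paraphrased illustration of the few-shot numbering fix.
# Before: example steps numbered 2, 5, 3 collided with the real
# step list (steps 1-7), and the model kept re-running step 2.
few_shot_before = """\
Example: Perform step 2: Summarize the key concepts...
Example: Perform step 5: List open questions...
Example: Perform step 3: Identify relationships...
"""

# After: double-digit example numbers can't collide with a short
# real step list, and the model stopped getting stuck.
few_shot_after = """\
Example: Perform step 42: Summarize the key concepts...
Example: Perform step 57: List open questions...
Example: Perform step 38: Identify relationships...
"""
```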
LLMs without MatMul...
On the weekly QA call I mentioned that I had read about work being done to develop an LLM architecture that doesn't rely on massive amounts of matrix multiplication. I only peripherally understand the concept, but it certainly seems like groundbreaking work, so I wanted to introduce it here for anyone who's interested. Here is the article I read, if you have access to Substack. And here's a link to the paper, if you would like to see the original publication. It would be great to hear other members' takes on this concept...
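As I understand it (and this is just my reading of the idea, so treat the sketch below as an assumption rather than what the paper actually implements), one core trick is constraining weights to ternary values {-1, 0, +1}, so a "matrix multiply" collapses into additions and subtractions:

```python
import numpy as np

# Sketch of the ternary-weight idea: with W restricted to {-1, 0, +1},
# y = W @ x needs no multiplications, only additions and subtractions.

def ternary_matvec(W: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Compute W @ x where W contains only -1, 0, +1, using adds only."""
    y = np.zeros(W.shape[0])
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            if W[i, j] == 1:
                y[i] += x[j]   # add instead of multiply
            elif W[i, j] == -1:
                y[i] -= x[j]   # subtract instead of multiply
    return y

W = np.array([[1, 0, -1], [0, 1, 1]])
x = np.array([2.0, 3.0, 4.0])
assert np.allclose(ternary_matvec(W, x), W @ x)  # same result, no multiplies
```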
2
1
New comment Jul 27
Weekly Support/QA Calls!
I want the AI dev club to provide more value to you all, and to help you bring your projects to life. Courses are fine and all, but personalized help can go a long way. Enter: QA calls every Saturday at 2:00 PM, Pacific Time! If you have a project you're building, a question about prompting or training or datagen, or you just want to hang out, this will be a great chance to get ahead! I'm scheduling the first such call for July 6th. It'll likely be a Zoom call. I hope to see you there! Let me know what you think of this initiative, too. Also, we're at 18 members after like our first week, which is really awesome! Glad you're all here 🙂
6
4
New comment Jul 7
1 like • Jul 1
Saturday at 2 sounds great. I'm in the Pacific zone - bear in mind that we are currently on Daylight Saving Time, so do you mean to use Pacific Standard Time instead of Pacific Daylight Time? Either is good for me - just want to be clear.
Course update: working on supplementary materials
Edit: they're done now! ---- original post ---- Basically the text version of the videos, for quick reference, skimming, and visual learning. I've also fixed the black bars on the videos, I think. Please let me know of any problems or post-production errors you come across in the course!
4
8
New comment Jul 24
Course update: working on supplementary materials
1 like • Jun 26
I will put aside some time soon to review your videos. Happy to assist your hard work in any way I can! 🙂
1 like • Jun 28
Checked out some of the vids today - this is looking really cool! I'll watch more and see if I can come up with some suggestions. Great work, Evan!
Brian Dalton
Level 3
36 points to level up
@brian-dalton-7760
AI hobbyist with delusions of grandeur! No formal training or education, just a strong sense that I can't NOT be involved in this stuff...

Active 36d ago
Joined Jun 21, 2024
California