In recent weeks, I've been somewhat vague about my AI and coding explorations. It's time to sharpen the focus and delve into the specifics of my AI assistant project and the research areas I'm most keen to explore.
Let me be clear: I'm not trying to reinvent the wheel. Where it makes sense, I'll leverage existing open-source and readily available software. Why build a language model from scratch when there are perfectly good ones already out there?
My core goal is to build an AI assistant, embodied in a robot head (and potentially a mobile platform), capable of experiencing the world and learning from those interactions.
While that might sound like standard fare in today's AI landscape, I aim to integrate some less common and, I believe, crucial features:
- Self-reflection: The ability for the AI to revisit past experiences with the benefit of hindsight, analysing its previous choices to determine if it would make the same decision again. Imagine the potential for growth if an AI could learn from its "mistakes" in a truly iterative way! (There's a rough sketch of this idea just after this list.)
- Reinforced Memory Prioritization: A large-capacity memory system that prioritizes reinforced memories, similar to how our own memories function. This would allow the AI to focus on and retain the most relevant and impactful information.
- Emotional Awareness: This is a significant challenge. I want the AI to learn from experiences that evoke "good" and "bad" responses. Human feelings are complex, influenced by chemical processes such as endorphin release. My AI won't have these biological processes, so I'll need to simulate them and, crucially, understand why they are needed and what effect they would have on the AI's cognition and decision-making.
- A Conscience: I want the AI to be capable of second-guessing its choices based on a defined set of ethical considerations, perhaps even drawing inspiration from Asimov's Laws of Robotics. As I've discussed previously, this is a complex but vital area of exploration.
- Dreams: Finally, I want to explore AI dreams. While there's existing research in this area, I believe I've identified a novel approach that could enable the AI to dream, with those dreams having a tangible impact on its cognition.
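To make the self-reflection idea a bit more concrete, here's a rough Python sketch of how I imagine it working: each decision gets logged with its context and how things turned out, and a later "reflection" pass replays those decisions to see whether the AI would now choose differently. The class and field names are placeholders at this stage, not the final design.

```python
# Rough sketch of self-reflection: log each decision with its context and
# outcome, then replay them later and ask whether the current policy would
# still make the same choice. All names here are illustrative placeholders.
from dataclasses import dataclass, field
import time


@dataclass
class Decision:
    context: dict          # what the AI perceived at the time
    action: str            # what it chose to do
    outcome_score: float   # how well that turned out (-1.0 .. 1.0)
    timestamp: float = field(default_factory=time.time)


class ReflectionLog:
    def __init__(self):
        self.decisions: list[Decision] = []

    def record(self, decision: Decision):
        self.decisions.append(decision)

    def reflect(self, current_policy):
        """Replay past decisions with hindsight and collect disagreements."""
        lessons = []
        for past in self.decisions:
            new_action = current_policy(past.context)
            if new_action != past.action and past.outcome_score < 0:
                # The AI would now act differently on something that went
                # badly: treat this as a lesson worth reinforcing.
                lessons.append((past, new_action))
        return lessons
```

The interesting cases are the disagreements over decisions that turned out badly; those are the ones worth feeding back into memory as reinforced lessons.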
This is undoubtedly a substantial undertaking for a single individual, and it might even exceed my current capabilities. But I'm committed to pursuing it. This project will demand extensive research into AI, the human mind, and the ethical implications of creating such a system.
Unlike some AI development approaches that rely on a single, powerful computer, I'm taking a distributed route.
Another key requirement is to minimize costs. To achieve this, each functional area (or "lobe") of the AI's "brain" will be housed on its own single-board computer. These will be interconnected, exchanging information as needed. This is similar in concept to the Robot Operating System (ROS), but I aim for greater speed and efficiency. I plan to use different boards, each selected for its strengths in specific tasks. For example, a board with a Kendryte K210 will handle vision processing, an Arduino Mega will manage motor control (yes, I know it's not an SBC, but it serves the purpose), and a Raspberry Pi will be used for memory management.
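To give a flavour of what I mean by the lobes "exchanging information as needed", here's a minimal sketch using plain UDP sockets and JSON messages. The lobe names, IP addresses, ports, and message format are purely illustrative assumptions; the real transport may end up looking quite different (a serial link in front of the Arduino, for instance).

```python
# Minimal sketch of inter-lobe messaging over UDP with JSON payloads.
# The addresses, ports, and message fields are assumptions for illustration;
# this is not ROS, just a lightweight stand-in for the idea.
import json
import socket

LOBE_ADDRESSES = {
    "vision": ("192.168.1.21", 5001),   # e.g. the K210 board (placeholder IP)
    "motor":  ("192.168.1.22", 5002),   # a bridge in front of the Arduino Mega
    "memory": ("192.168.1.23", 5003),   # the Raspberry Pi handling memory
}


def send_message(lobe: str, topic: str, payload: dict):
    """Send a small JSON message to another lobe on the local network."""
    msg = json.dumps({"topic": topic, "payload": payload}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg, LOBE_ADDRESSES[lobe])


def listen(port: int):
    """Blocking receive loop for a lobe; yields decoded messages."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("0.0.0.0", port))
        while True:
            data, _addr = sock.recvfrom(4096)
            yield json.loads(data.decode("utf-8"))


# Example: the vision lobe reporting a detected face to the memory lobe.
# send_message("memory", "vision/face_detected", {"confidence": 0.92})
```

Keeping the messages small and the protocol simple is the whole point: I want the boards to talk to each other quickly, without pulling in a full ROS stack.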
The AI will also utilize Large Language Models (LLMs), likely at least three, for tasks such as understanding speech, processing input, and producing output. However, unlike many systems that employ a single LLM, these will be distinct entities within the AI's architecture.
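As a rough illustration of what I mean by distinct LLMs rather than a single one, here's a sketch where each role (understanding speech, deciding what to do, producing output) sits behind its own model via a simple generate() wrapper. The role names and the wrapper are assumptions for the sake of the example; in practice each one could front a locally hosted model such as Gemma or Llama 2.

```python
# Sketch of routing work through several distinct LLMs, one per role.
# The roles and the generate() wrapper are illustrative assumptions, not a
# final architecture.
from typing import Callable


class LLMRole:
    def __init__(self, name: str, generate: Callable[[str], str]):
        self.name = name
        self.generate = generate  # wraps whatever local inference backend is used


def build_pipeline(listener: LLMRole, thinker: LLMRole, speaker: LLMRole):
    """Chain three separate models: speech understanding -> processing -> output."""
    def respond(raw_speech_text: str) -> str:
        understood = listener.generate(raw_speech_text)
        decision = thinker.generate(understood)
        return speaker.generate(decision)
    return respond


# Example wiring with stand-in models (echo lambdas in place of real inference):
# respond = build_pipeline(
#     LLMRole("listener", lambda text: text),
#     LLMRole("thinker", lambda text: f"plan: {text}"),
#     LLMRole("speaker", lambda text: f"say: {text}"),
# )
```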
Memory management will involve a "scoring" system to prioritize important information for short-term caching, while less critical memories will reside in long-term storage. To prevent storage overload, I'll also implement a memory decay system that will gradually remove memories that become irrelevant to the AI's ongoing operation.
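Here's a rough sketch of how that scoring and decay could work in practice. The thresholds and decay rate are placeholder numbers I've made up for illustration; tuning them will be part of the fun.

```python
# Sketch of memory scoring and decay: reinforced memories keep a high score and
# stay in the short-term cache; unused memories decay and are eventually
# dropped. The thresholds and decay rate below are placeholders.
import time

DECAY_RATE = 0.01        # score lost per hour without reinforcement (assumed)
CACHE_THRESHOLD = 0.7    # minimum score to stay in the short-term cache
FORGET_THRESHOLD = 0.05  # below this, the memory is removed entirely


class Memory:
    def __init__(self, content: str):
        self.content = content
        self.score = 0.5
        self.last_touched = time.time()

    def reinforce(self, amount: float = 0.2):
        """Bump the score when the memory proves useful again."""
        self.score = min(1.0, self.score + amount)
        self.last_touched = time.time()

    def decayed_score(self) -> float:
        hours_idle = (time.time() - self.last_touched) / 3600
        return max(0.0, self.score - DECAY_RATE * hours_idle)


def triage(memories: list[Memory]):
    """Split memories into short-term cache, long-term store, and forgotten."""
    cache, long_term, forgotten = [], [], []
    for m in memories:
        s = m.decayed_score()
        if s >= CACHE_THRESHOLD:
            cache.append(m)
        elif s >= FORGET_THRESHOLD:
            long_term.append(m)
        else:
            forgotten.append(m)
    return cache, long_term, forgotten
```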
I'm aiming to keep the total project cost under $2,000. Whether that's achievable remains to be seen, but it's a target I'm striving for.
*Do androids dream of electric sheep?*
Oh, and the dreams? You'll have to wait a while before I reveal the details of their implementation. Suffice it to say that, like humans, my AI will have to sleep, and this will be a non-negotiable requirement.
So, what's my ultimate goal? It's to create something new, something that hasn't been done before. Not necessarily the individual components; as I've stated, I'll be using pre-existing software where possible (such as Gemma or Llama 2). My ambition is to synthesize everything in a novel way, exploring how this approach could not only advance AI research but also contribute to making AI safer for the general public.
And on that note, I can finally reveal the name I've given this project, and the meaning behind it: D.A.N.I. stands for Dreaming AI Neural Integration. This encapsulates the core of my research: to explore the potential of AI that learns and grows through a process akin to dreaming, deeply integrated within a neural network structure.
My wife jokes that I'm planning to build Skynet, but my intention is precisely the opposite. I'm designing a system that would be inherently incapable of becoming Skynet – think more C-3PO than Terminator.
By incorporating the ability to learn from its mistakes (and successes), as well as the capacity for dreaming, I hope to enable the AI to accelerate its learning process. We, as humans, frequently revisit our decisions, so why not equip an AI to do the same?
I also aim to provide you with an engaging narrative of my development journey. I anticipate making many mistakes. But that's part of the learning process – discovering not only how to do things, but also how not to do them.
If you have any comments or questions, please leave them below. 👇