Aiming for Jarvis, Creating D.A.N.I.

Friday, 21 March 2025

Diving Deeper: My Journey to Create a Safer AI

In recent weeks, I've been somewhat vague about my AI and coding explorations. It's time to sharpen the focus and delve into the specifics of my AI assistant project and the research areas I'm most keen to explore.    

Let me be clear: I'm not trying to reinvent the wheel. Where it makes sense, I'll leverage existing open-source and readily available software. Why build a language model from scratch when there are perfectly good ones already out there?    

My core goal is to build an AI assistant, embodied in a robot head (and potentially a mobile platform), capable of experiencing the world and learning from those experiences.

While that might sound like standard fare in today's AI landscape, I aim to integrate some less common and, I believe, crucial features:    

  • Self-reflection: The ability for the AI to revisit past experiences with the benefit of hindsight, analysing its previous choices to determine whether it would make the same decision again (a minimal sketch of this idea follows this list).  Imagine the potential for growth if an AI could learn from its "mistakes" in a truly iterative way!    
  • Reinforced Memory Prioritization: A large-capacity memory system that prioritizes reinforced memories, similar to how our own memories function.  This would allow the AI to focus on and retain the most relevant and impactful information.    
  • Emotional Awareness: This is a significant challenge. I want the AI to learn from experiences that evoke "good" and "bad" responses.  Human feelings are complex, driven by chemical processes such as neurotransmitters and endorphins.  My AI won't have these biological processes, so I'll need to simulate them and, crucially, understand why they are needed and what effect they would have on the AI's cognition and decision-making.    
  • A Conscience: I want the AI to be capable of second-guessing its choices based on a defined set of ethical considerations, perhaps even drawing inspiration from Asimov's Laws of Robotics.  As I've discussed previously, this is a complex but vital area of exploration.    
  • Dreams: Finally, I want to explore AI dreams. While there's existing research in this area, I believe I've identified a novel approach that could enable the AI to dream, with those dreams having a tangible impact on its cognition.    
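
To make the self-reflection idea a little more concrete, here's a minimal sketch of the kind of mechanism I have in mind. Everything in it is a placeholder: the Experience structure, the stubbed-out policy, and the scoring are illustrative assumptions, not the final design.

```cpp
#include <iostream>
#include <string>
#include <vector>

// One remembered decision: what was seen, what was done, how it turned out.
// All names and fields here are hypothetical placeholders.
struct Experience {
    std::string context;   // what the AI perceived at the time
    std::string action;    // what it chose to do
    double outcome;        // how well that worked out (-1 bad .. +1 good)
};

// Stand-in for the current decision policy: given a context, pick an action.
// A real version would query a model; here it's a trivial stub.
std::string currentPolicy(const std::string& context) {
    return context == "human looks upset" ? "speak softly" : "carry on";
}

// Replay past experiences with the benefit of hindsight: would the AI,
// as it is *now*, make the same choice again? Disagreements that led to
// bad outcomes are exactly the "mistakes" worth learning from.
void reflect(const std::vector<Experience>& memory) {
    for (const Experience& e : memory) {
        std::string choiceNow = currentPolicy(e.context);
        if (choiceNow != e.action && e.outcome < 0) {
            std::cout << "Lesson: in '" << e.context << "' I did '"
                      << e.action << "', today I'd do '" << choiceNow
                      << "' -- reinforce the new behaviour\n";
        }
    }
}

int main() {
    std::vector<Experience> memory = {
        {"human looks upset", "crack a joke", -0.8},
        {"human looks upset", "speak softly", +0.6},
    };
    reflect(memory);  // run during "sleep", when nothing else is happening
}
```

The key point is the loop: old decisions get re-judged by the present-day AI, and only the disagreements that ended badly trigger learning.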

This is undoubtedly a substantial undertaking for a single individual, and it might even exceed my current capabilities.  But I'm committed to pursuing it. This project will demand extensive research into AI, the human mind, and the ethical implications of creating such a system.    

Unlike some AI development approaches that rely on a single, powerful computer, I'm taking a distributed route.    

Another key requirement is to minimize costs.  To achieve this, each functional area (or "lobe") of the AI's "brain" will be housed on its own single-board computer.  These will be interconnected, exchanging information as needed.  This is similar in concept to the Robot Operating System (ROS), but I aim for greater speed and efficiency.  I plan to use different boards, each selected for its strengths in specific tasks.  For example, a board with a Kendryte K210 will handle vision processing, an Arduino Mega will manage motor control (yes, I know it's not an SBC, but it serves the purpose), and a Raspberry Pi will be used for memory management.    
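
To give a flavour of what "ROS-like, but lighter" might mean, here's a rough sketch of the sort of message passing I'm considering between lobes. The plain-text frame layout, the broadcast address, and port 4210 are all assumptions for illustration, not a settled protocol.

```cpp
// Minimal lobe-to-lobe messaging sketch (POSIX sockets, runs on a Pi).
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

// One frame on the wire: "sender|topic|payload\n". Keeping it as plain
// text makes it easy to debug with nothing more than a serial monitor.
std::string makeFrame(const std::string& sender,
                      const std::string& topic,
                      const std::string& payload) {
    return sender + "|" + topic + "|" + payload + "\n";
}

// Broadcast a frame to every board on the internal network (port 4210
// is an arbitrary choice here).
void publish(const std::string& frame) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int yes = 1;
    setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4210);
    addr.sin_addr.s_addr = inet_addr("255.255.255.255");

    sendto(sock, frame.data(), frame.size(), 0,
           reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    close(sock);
}

int main() {
    // e.g. the vision lobe (the K210 board) telling everyone what it sees
    publish(makeFrame("vision", "object", "cup"));
}
```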

The AI will also utilize Large Language Models (LLMs), likely at least three, for tasks such as understanding speech, processing input, and producing output.  However, unlike many systems that employ a single LLM, these will be distinct entities within the AI's architecture.    
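
As a sketch of what "distinct entities" means in practice, here's a toy version of the pipeline. The role names and stubbed replies are mine; in reality each stage would wrap its own locally hosted model (a Gemma or Llama instance, say, running on its own board).

```cpp
#include <iostream>
#include <string>

// Three *separate* models, each owning one job, rather than one LLM
// doing everything. The run() method is a stand-in for a real
// inference call to that model.
struct LanguageModel {
    std::string role;
    std::string run(const std::string& text) const {
        return "[" + role + " output for: " + text + "]";
    }
};

int main() {
    LanguageModel listener{"speech-understanding"};
    LanguageModel thinker{"input-processing"};
    LanguageModel speaker{"output-generation"};

    // Each stage hands its result to the next, but the models remain
    // distinct entities with no shared weights or state.
    std::string heard   = listener.run("audio transcript goes here");
    std::string thought = thinker.run(heard);
    std::cout << speaker.run(thought) << "\n";
}
```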

Memory management will involve a "scoring" system to prioritize important information for short-term caching, while less critical memories will reside in long-term storage.  To prevent storage overload, I'll also implement a memory decay system that will gradually remove memories that become irrelevant to the AI's ongoing operation.    
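
Here's a toy illustration of that scoring-and-decay idea. The decay rate, the eviction threshold, and the cache size are made-up numbers purely for demonstration.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Memory {
    std::string content;
    double score;  // importance: raised by reinforcement, eroded by decay
};

struct MemoryStore {
    std::vector<Memory> memories;

    // Recalling a memory reinforces it, keeping it "hot".
    void reinforce(Memory& m) { m.score += 1.0; }

    // Called periodically (say, during sleep): every score fades a
    // little, and memories that have become irrelevant are removed
    // so storage never overflows.
    void decay() {
        for (Memory& m : memories) m.score *= 0.95;
        memories.erase(
            std::remove_if(memories.begin(), memories.end(),
                           [](const Memory& m) { return m.score < 0.1; }),
            memories.end());
    }

    // The short-term cache is simply the n highest-scoring memories.
    std::vector<Memory> shortTermCache(size_t n) {
        std::sort(memories.begin(), memories.end(),
                  [](const Memory& a, const Memory& b) {
                      return a.score > b.score;
                  });
        if (memories.size() > n)
            return std::vector<Memory>(memories.begin(),
                                       memories.begin() + n);
        return memories;
    }
};

int main() {
    MemoryStore store;
    store.memories = {{"owner's name", 5.0}, {"yesterday's weather", 0.2}};
    for (int cycle = 0; cycle < 20; ++cycle) store.decay();
    // After enough cycles the weather has decayed away; the name survives.
    for (const Memory& m : store.shortTermCache(5))
        std::cout << m.content << " (score " << m.score << ")\n";
}
```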

I'm aiming to keep the total project cost under $2,000.  Whether that's achievable remains to be seen, but it's a target I'm striving for.    

Do androids dream of electric sheep?

Oh, and the dreams?  You'll have to wait a while before I reveal the details of their implementation.  Suffice it to say that, like humans, my AI will have to sleep, and this will be a non-negotiable requirement.    

So, what's my ultimate goal?  It's to create something new, something that hasn't been done before.  Not necessarily the individual components; as I've stated, I'll be using pre-existing software where possible (such as Gemma or Llama 2).  My ambition is to synthesize everything in a novel way, exploring how this approach could not only advance AI research but also contribute to making AI safer for the general public.   

And on that note, I can finally reveal the name I've given this project, and the meaning behind it: D.A.N.I. stands for Dreaming AI Neural Integration. This encapsulates the core of my research: to explore the potential of AI that learns and grows through a process akin to dreaming, deeply integrated within a neural network structure.

My wife jokes that I'm planning to build Skynet, but my intention is precisely the opposite.  I'm designing a system that would be inherently incapable of becoming Skynet – think more C-3PO than Terminator.    

By incorporating the ability to learn from its mistakes (and successes), as well as the capacity for dreaming, I hope to enable the AI to accelerate its learning process.  We, as humans, frequently revisit our decisions, so why not equip an AI to do the same?    

I also aim to provide you with an engaging narrative of my development journey.  I anticipate making many mistakes. But that's part of the learning process – discovering not only how to do things, but also how not to do them.    

If you have any thoughts or questions, please leave a comment below.    👇



Sunday, 16 March 2025

The Three Laws of Robotics: Can They Really Work?

Isaac Asimov

Isaac Asimov, the renowned science fiction author, introduced the world to the "Three Laws of Robotics" in his short story "Runaround," later included in the "I, Robot" collection.  These laws became a cornerstone of his robot series and have since sparked much debate and thought in the fields of robotics and artificial intelligence.    

The Original Three Laws

Asimov's original laws are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.    
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.    
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.    

These laws, while fictional, have prompted serious discussions about the ethics and safety of AI.  The idea is that if these laws could be successfully implemented, robots and AI would be inherently restricted from making harmful decisions.  This would not only create a safer environment for human-robot interaction but also limit the potential for misuse of robots in areas like the military.    

The Zeroth Law

Later, in "Robots and Empire," the robot character R. Daneel Olivaw introduced the Zeroth Law, which takes precedence over the original three:

A robot may not injure humanity or, through inaction, allow humanity to come to harm.    

This addition broadens the scope of protection from individual humans to humanity as a whole.    

The Challenge of Implementation

While these laws provide a great framework, there are significant challenges in putting them into practice.

Conceptual Challenges

One of the primary issues lies in the interpretation of key terms.  For example, what precisely constitutes "injury" or "harm"?  Is it limited to physical harm, or does it encompass emotional, psychological, and intellectual harm as well?    

Consider these scenarios:

  • If an action could harm one person but inaction would harm another, what should a robot do?    
  • If someone is about to kill another person, is it justifiable for a robot to intervene with lethal force to prevent it?    
  • Is it worse to allow a human to suffer a minor physical injury or to cause potentially longer-lasting emotional harm?    

The Second Law also presents difficulties.  If a robot is given an order that appears harmless initially but could lead to harm later, should the robot obey?  How far into the future should a robot or AI be required to predict the consequences of an action?  If a robot is asked to make a knife, should it refuse, knowing its potential for harm?  Should the robot be prohibited from mining the metal required to make the knife?    

As you can see, applying these laws involves navigating a complex web of nuances.    

A Potential Solution: The 'Virtual Conscience'

The challenge then becomes: how do we implement these laws in a meaningful way, especially with advanced AI systems like neural networks that are constantly learning?    

One proposed approach involves a 'virtual conscience'.  This would be a separate neural network designed to act as an independent arbiter, validating the actions of the main AI.  By training these models separately, we could create a system where the AI's decisions are checked by an objective ethical framework.  It might even be possible to freeze the 'conscience' network's weights after training, preventing the main AI from altering its ethical parameters.    
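
As a sketch of the pattern (not a working ethics system!), here's the shape of the arbiter idea in code. The scoring logic is a trivial stand-in for what would really be a separately trained network with frozen weights.

```cpp
#include <iostream>
#include <string>

// The 'virtual conscience' as an independent arbiter. evaluate() is a
// placeholder for inference through a frozen, separately trained network.
class Conscience {
public:
    // Returns an "ethical acceptability" score in [0,1].
    double evaluate(const std::string& proposedAction) const {
        if (proposedAction.find("harm") != std::string::npos) return 0.0;
        return 0.9;
    }
};

// The main AI can only act through this gate; it never touches the
// conscience's internals.
bool approve(const Conscience& conscience, const std::string& action,
             double threshold = 0.5) {
    return conscience.evaluate(action) >= threshold;
}

int main() {
    const Conscience conscience;  // const: the main AI cannot modify it
    for (std::string action : {"fetch a drink", "harm the intruder"}) {
        std::cout << action << " -> "
                  << (approve(conscience, action) ? "allowed" : "vetoed")
                  << "\n";
    }
}
```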

The Need for Safeguards

As AI and robotics advance at an incredible pace, establishing safeguards is crucial.  We are at a pivotal moment where we can integrate ethical considerations into the very foundation of these technologies.    

However, achieving this is not without its obstacles.  Agreement among robot manufacturers is essential for widespread adoption.  Unlike Asimov's positronic brains, which had the laws hardwired, current robots and AI do not have this built-in restriction.    

Recent announcements, such as the White House's move to remove certain ethical restrictions from AI and robotics research, further complicate the matter.  Additionally, the pursuit of military applications and the rise of non-government entities in AI development pose challenges to enforcing ethical standards.    

Conclusion

Asimov's Three Laws of Robotics, and the subsequent Zeroth Law, provide a valuable starting point for discussions around AI ethics.  While their implementation is complex, the need for ethical guidelines in AI development is undeniable.    

What are your thoughts? Can these laws be effectively implemented?  Will they make a significant difference in the future of AI?    


I look forward to hearing your comments and perspectives.

Saturday, 1 March 2025

Building My AI Fortress: The Digital Clean Room (and Taking it on the Road!)

In the exciting and sometimes unpredictable world of AI development, especially when you're diving into robotics and experimental projects, security and control are absolutely crucial. That's why I'm building a "digital clean room" for my AI project – a secure, isolated environment where it can grow and develop without external interference.


Imagine it as a "network within a network," a private lab within my digital space. To create this, I'm using two trusty ESP32 microcontrollers, those versatile little chips that are perfect for this kind of project.


The Front Door: Controlled Access

The first ESP32 acts as the "gatekeeper." It connects to my home router (or any available Wi-Fi), allowing me to access the clean room from the outside world. However, this access is strictly limited – think of it as a heavily guarded checkpoint. I'm using the WiFiManager library for this ESP32, which is a lifesaver: if it can't connect to a saved network, it spins up its own access point with a captive portal where I can pick a network and enter the credentials. No more hardcoding network details for every location!
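
For the curious, the gatekeeper's core boils down to a few lines. This is a minimal sketch using WiFiManager's standard captive-portal flow; the access-point name is just my placeholder.

```cpp
// Arduino sketch for the "gatekeeper" ESP32, using tzapu's WiFiManager
// library (https://github.com/tzapu/WiFiManager).
#include <WiFiManager.h>

void setup() {
    Serial.begin(115200);

    WiFiManager wm;
    // If no saved credentials work, WiFiManager starts an access point
    // (here "DANI-Gatekeeper") with a captive portal where I can pick a
    // network and type in the password -- nothing hardcoded.
    if (!wm.autoConnect("DANI-Gatekeeper")) {
        Serial.println("Failed to connect; restarting");
        ESP.restart();
    }
    Serial.print("Gatekeeper online, IP on the outside network: ");
    Serial.println(WiFi.localIP());
}

void loop() {
    // ...handle strictly limited requests from the outside here...
}
```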


The Inner Sanctum: AI's Safe Haven

The 'Airlock'
The second ESP32 creates the "inner sanctum," acting as a dedicated access point for my AI and robotics projects. This internal network is completely isolated from the outside internet, a secure bubble where my AI can learn and grow without external influence.


The Airlock: Bridging the Gap

The two ESP32s communicate using UART, a simple serial communication protocol. This is where the magic happens – it's where I can implement strict filtering and control the flow of information, acting as an airlock between the AI and the outside world. I've also added a small display that shows the external IP address (in case I ever need to access the clean room remotely) and status messages.
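
Here's a simplified sketch of what that filtering might look like on the inner ESP32. The pin assignments and the command whitelist are illustrative assumptions.

```cpp
// The "airlock" filter on the inner ESP32: only messages whose command
// appears on a whitelist are allowed through from the gatekeeper.
#include <Arduino.h>

HardwareSerial& gate = Serial2;  // UART link to the gatekeeper ESP32

// Hypothetical whitelist: everything else is dropped at the airlock.
const char* allowed[] = {"STATUS", "PING", "LOG"};

bool isAllowed(const String& msg) {
    for (const char* cmd : allowed) {
        if (msg.startsWith(cmd)) return true;
    }
    return false;
}

void setup() {
    Serial.begin(115200);                    // USB debug console
    gate.begin(115200, SERIAL_8N1, 16, 17);  // RX=16, TX=17 (example pins)
}

void loop() {
    if (gate.available()) {
        String msg = gate.readStringUntil('\n');
        if (isAllowed(msg)) {
            Serial.println("PASS: " + msg);  // forward to the inner network
        } else {
            Serial.println("DROP: " + msg);  // blocked at the airlock
        }
    }
}
```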


Why This Elaborate Setup?

  • Security First: While I don't expect hordes of hackers, it's always better to be safe than sorry. This setup adds a robust layer of protection, significantly hindering unauthorized access.
  • AI Purity: Controlled Learning: I want to shield my AI from external influences, especially during its formative stages. It's vital to control the data it learns from, ensuring a solid foundation.
  • Protecting the "Child": Think of it as nurturing a child in a safe environment. I want to carefully curate the AI's initial experiences and training data before it explores the vast and sometimes chaotic internet.
  • Road-Ready AI, Cyberdeck Style!: This airlock setup has an added benefit. It makes my projects truly portable! I can take them on the road for demonstrations. As long as there's a power source and a network, I can connect an additional computer, like my trusty Raspberry Pi400, directly into the internal network. And here's the fun part: I'm planning to add a small TFT screen via a cyberdeck expansion to the Pi400. This will transform it into a self-contained, portable AI workstation, allowing me to interact with the AI's code, monitor its memory, and keep everything running smoothly, no matter where I am.

The Portable Command Centre: My Raspberry Pi400

At the heart of my portable AI lab is my trusty Raspberry Pi400. This isn't just a computer; it's the portable command centre for my AI project. It's the key to interacting with the AI within the secure confines of the internal network.


Here's why the Pi400 is so crucial:

  • Direct Access to the Inner Sanctum: The Pi400 connects directly to the internal network (IN), providing a secure platform for code development, debugging, and monitoring.
  • Real-Time Monitoring: I can use the Pi400 to monitor the AI's memory usage, processing activity, and overall performance.
  • Cyberdeck Expansion: To make this truly portable, I'm adding a small TFT screen via a cyberdeck expansion. This transforms the Pi400 into a self-contained, mobile workstation.
  • On-the-Go Development: Whether I'm at a demonstration, a workshop, or simply working from a different location, the Pi400 allows me to access and manage my AI project with ease.
  • The gateway to the airlock: The Pi400 will be the main device used to send and receive information through the airlock.

With the Pi400 as my portable command centre, I have the flexibility and control I need to develop and showcase my AI projects, no matter where I am.

At a later date, I may replace the Pi400 with a PiZero2 and a custom keyboard to make things more compact, but for now, the Pi400 is perfect for the job. Plus, it gives me a full-size keyboard for a more comfortable experience.

Controlled Growth, Controlled Access

Yes, this approach might temporarily limit the AI's exposure and potentially slow its initial learning curve. However, in this experimental space, I believe caution is key. It's better to guide its development carefully than to risk exposing it to unfiltered data too early.


By creating this digital clean room and airlock, I'm building a secure, portable, and controlled environment where my AI can thrive. And with the cyberdeck expansion, I'm adding a touch of classic hacker spirit to my mobile AI lab. It's a testament to the importance of security, careful nurturing, and adaptability in the exciting world of AI development.