I have a confession to make. I’ve broken my own rule.
When I set out to build and program DANI, one of my core principles was to ensure he learns primarily from experience. The goal has always been emergent behavior through his neural architecture, with as little "hard-coded" logic as possible. But as I’ve delved deeper into the complexities of human interaction, I’ve found one area where I feel a departure is justified: Natural Language Processing (NLP).
The LLM Dilemma
[Image: The LLM Sledgehammer]
First, there’s the cold, hard reality of the budget. I’ve managed to keep the total cost of DANI—parts, boards, and all—under £500. Adding a high-spec NPU or a secondary board capable of running something like Llama or Gemma would have blown that goal out of the water.
Then there’s the physical engineering. Space is at a premium inside DANI’s chassis. While I probably could have squeezed another board in there, I’m increasingly conscious of airflow. The last thing I want is for DANI’s "brain" to thermal throttle in the middle of a conversation.
But most importantly—and this was the dealbreaker—is the issue of personality. If I use a pre-trained model like Qwen or Llama, I’m essentially importing someone else’s bias and conversational style. These models are fine-tuned to be helpful assistants; I want DANI to be DANI. Using a massive, multi-billion parameter model just to parse a "hello" felt like using a sledgehammer to crack a hazelnut.
The Go-pher’s Path to Understanding
Instead of the LLM route, I’ve built a custom natural language module using standard NLP libraries for Go. It’s lightweight, it fits within our existing hardware constraints, and it gives me the control I need.
I’ve added two critical features that an off-the-shelf LLM wouldn't handle the way I want:
- Sentiment Assessment: By using established sentiment modules, DANI can now perceive whether he is being praised or scolded. This feeds directly into his hormone levels. If I’m happy with his performance and tell him he's done a good job, his "positive" hormones will rise, reinforcing that behaviour in his training.
- Simplified Context Awareness: I wanted conversations to feel natural. If I ask DANI, "Where is he?", he needs to understand that "he" refers to the last male person we discussed. Similarly, "Go there" should resolve "there" to the last location mentioned. This kind of stateful awareness is vital for a robot that exists in a physical space.
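To make the two features above concrete, here is a minimal Go sketch of the idea, not DANI's actual module: a tiny lexicon-based sentiment score and a `Context` struct that swaps pronouns for the last known referents. The lexicon, the struct, and all the names here are my own illustrative inventions; a real module would lean on an established sentiment library and a much larger word list.

```go
package main

import (
	"fmt"
	"strings"
)

// Tiny illustrative sentiment lexicon (hypothetical values).
var sentimentLexicon = map[string]int{
	"good": 1, "great": 2, "well": 1, "clever": 1,
	"bad": -1, "wrong": -1, "naughty": -2, "stop": -1,
}

// ScoreSentiment sums word scores: positive means praise, negative a scolding.
// The result could then nudge "positive" hormone levels up or down.
func ScoreSentiment(utterance string) int {
	score := 0
	for _, w := range strings.Fields(strings.ToLower(utterance)) {
		score += sentimentLexicon[strings.Trim(w, ".,!?")]
	}
	return score
}

// Context tracks the most recent referents for simple coreference.
type Context struct {
	LastMale     string // resolves "he"/"him"
	LastLocation string // resolves "there"
}

// Resolve swaps pronouns for the last matching referent, if one is known.
// Crude by design: trailing punctuation on a replaced word is dropped.
func (c *Context) Resolve(utterance string) string {
	words := strings.Fields(utterance)
	for i, w := range words {
		switch strings.ToLower(strings.Trim(w, ".,!?")) {
		case "he", "him":
			if c.LastMale != "" {
				words[i] = c.LastMale
			}
		case "there":
			if c.LastLocation != "" {
				words[i] = c.LastLocation
			}
		}
	}
	return strings.Join(words, " ")
}

func main() {
	fmt.Println(ScoreSentiment("Good job, that was great!")) // 3
	ctx := &Context{LastMale: "Dave", LastLocation: "the kitchen"}
	fmt.Println(ctx.Resolve("Where is he?"))  // Where is Dave
	fmt.Println(ctx.Resolve("Go there now.")) // Go the kitchen now.
}
```

The point of the sketch is the statefulness: the `Context` struct is updated as conversation flows, so "he" and "there" always mean something grounded in what was last said.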
Commands, Questions, and the "Ignore" Factor
The module is now capable of distinguishing between commands, questions, and general statements. Each triggers a different internal processing path, but here is the kicker: everything is still influenced by his hormones.

Before DANI responds, every decision passes through his LSTM (Long Short-Term Memory) core. Because that LSTM is also fed the current state of his effective hormones, there is no guarantee he will do what he's told. If he's in a "bad mood", or his hormone levels are skewed by previous interactions, he might simply decide to ignore me entirely.
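As a rough illustration of that routing, here is a hedged Go sketch: a heuristic classifier for the three utterance types, plus a stand-in gate for the hormone-influenced decision. The word lists, the `ShouldObey` function, and its threshold are hypothetical; the real gate is the LSTM itself, not a simple cutoff.

```go
package main

import (
	"fmt"
	"strings"
)

// UtteranceKind selects one of three internal processing paths.
type UtteranceKind int

const (
	Statement UtteranceKind = iota
	Question
	Command
)

// Illustrative word lists; a real classifier would be far richer.
var questionWords = map[string]bool{
	"who": true, "what": true, "where": true, "when": true,
	"why": true, "how": true, "is": true, "are": true, "can": true,
}
var commandVerbs = map[string]bool{
	"go": true, "stop": true, "turn": true, "look": true, "come": true,
}

// Classify uses simple surface cues: a trailing "?" or a leading
// question word marks a question; a leading bare verb marks a command.
func Classify(utterance string) UtteranceKind {
	trimmed := strings.TrimSpace(utterance)
	if strings.HasSuffix(trimmed, "?") {
		return Question
	}
	words := strings.Fields(strings.ToLower(trimmed))
	if len(words) == 0 {
		return Statement
	}
	if questionWords[words[0]] {
		return Question
	}
	if commandVerbs[words[0]] {
		return Command
	}
	return Statement
}

// ShouldObey is a deterministic stand-in for the LSTM gate: the same
// command may be ignored when the effective hormone state is too low.
func ShouldObey(mood float64) bool {
	return mood > -0.5
}

func main() {
	fmt.Println(Classify("Where is he?") == Question)      // true
	fmt.Println(Classify("Go to the kitchen") == Command)  // true
	fmt.Println(Classify("You did well today") == Statement) // true
	fmt.Println(ShouldObey(-1.0))                          // false: ignored
}
```

The key design point survives even in this toy form: classification decides *which* path an utterance takes, while the hormone-fed gate decides *whether* anything happens at all.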
It’s a bit of a gamble, breaking my "no hard-coding" rule to build this framework, but I think it’s the only way to give DANI a voice that is truly his own. We’ll just have to wait and see whether he actually listens to me.

