It’s been a fun March. A bunch of friends and family have installed the app and tried things out, and their feedback has been helpful even when they didn’t realize they were giving feedback. The questions they ask about how to use something, and their ideas for new features, give me insight into how people who aren’t me use the app, or help validate some of the ideas I’ve had. If you’re reading this, thanks!
On the marketing side I’ve been prepping some branding tweaks to fit better with how I want to position the company. The original idea (and name) for Sovernus was “personal sovereignty through modern technology”: helping people use technology to become more self-reliant. For a while I shifted toward home maintenance branding because I felt like it would serve as a beachhead with a wider audience. After talking to people, though, I’ve learned that they probably aren’t going to pay for an app that just provides some guides and reminders for home maintenance tasks, and that they are more excited by my original idea. Home maintenance is still a part of that, but I’m not going to build a brand specifically around it when DIYers and homeowners have a broader set of interests and don’t want to install several apps to take care of them.
On the product and technology side, people using the app naturally turned up some bugs and quirks, so I’ve spent some time patching those up: making tweaks and clarifications, and pondering solutions to their issues.
A comment on developing software with AI: vibe coding lets you whip stuff up quickly and test new ideas, but it lets everyone else do the same. It’s no longer sufficient to just build something basic; you need to be either a better marketer or a better engineer. Since marketing is not my strong point, I want to compete by building a better product in ways that AI can’t do very well.
This leads to the R&D I’ve been working on. I’ve been playing around a lot with combining augmented reality (AR) and fast AI inference, trying to use the phone camera to understand what’s going on in the real world so that the app can interact with the user in near-real-time while they do their tasks. Normal LLMs don’t work very well here because they are slow: they take several seconds to perform inference, and they require a solid network connection that doesn’t necessarily exist in the places where people are using the app (garages, gardens). I’m trying to see how much mileage I can get out of advances in non-LLM AI that can operate in the tens-to-hundreds-of-milliseconds range using just the phone’s hardware.
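To make that latency constraint concrete, here’s a minimal back-of-the-envelope sketch (the numbers are illustrative assumptions, not measurements from my app) of why a multi-second cloud LLM round trip can’t drive camera-based interaction, while an on-device model in the tens of milliseconds can:

```python
# Per-frame latency budget for near-real-time camera feedback.
# If the app should react at a given frame rate, each inference
# pass has to finish within one frame's worth of time.

def fits_realtime(inference_ms: float, target_fps: float = 10.0) -> bool:
    """Return True if a model's per-frame latency fits the frame budget."""
    frame_budget_ms = 1000.0 / target_fps  # e.g. 100 ms at 10 fps
    return inference_ms <= frame_budget_ms

# Assumed latencies for illustration:
print(fits_realtime(50))    # small on-device model, ~50 ms -> True
print(fits_realtime(3000))  # cloud LLM round trip, ~3 s    -> False
```

Even a modest 10 fps target leaves only a 100 ms budget per frame, which is why I’m focused on small on-device models rather than shuttling frames to a hosted LLM.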
Anyway, that’s all for now. Thanks for reading! I’m happy to chat in more detail about any of these things, so don’t hesitate to reach out.