User Preferences
The holy grail has always been human preference. Why? LLMs unlocked intent, but preference unlocks autonomy.
We'll need an "OAuth into you, the human" to achieve that.
There are a few very narrow tasks you can remove the human from entirely; everything else requires subjective approval. That approval requires options, and generating options takes time.
Even after you think you have enough preferences to decide from, you don't. Our indecisiveness over restaurant menus proves as much.
By inferring, collecting, and injecting preferences along the way, you can remove the need for many of those approvals and options, but plenty still remain (e.g. Amazon purchases).
In the future, each of us will have a collective preference store that we maintain and allow 3rd party services to request from, e.g. via OAuth handshake.
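To make the shape of that handshake concrete, here is a minimal sketch of a personal preference store that third-party services can only read through scoped, revocable grants, in the spirit of OAuth. Every name here (`PreferenceStore`, `grant`, `request`, the token format) is a hypothetical illustration, not a real protocol.

```python
from dataclasses import dataclass, field


@dataclass
class PreferenceStore:
    # Preferences keyed by scope, e.g. "dining" -> {"cuisine": "thai"}.
    prefs: dict = field(default_factory=dict)
    # Active grants: token -> set of scopes the holder may read.
    grants: dict = field(default_factory=dict)

    def grant(self, service: str, scopes: set) -> str:
        # A real system would mint an unguessable random token.
        token = f"tok-{service}"
        self.grants[token] = set(scopes)
        return token

    def revoke(self, token: str) -> None:
        # Access is retractable at any time; the data never leaves our store.
        self.grants.pop(token, None)

    def request(self, token: str, scope: str) -> dict:
        # Services see only the scopes we granted, nothing else.
        if scope not in self.grants.get(token, set()):
            raise PermissionError(f"no grant for scope '{scope}'")
        return self.prefs.get(scope, {})


store = PreferenceStore(prefs={"dining": {"cuisine": "thai", "spice": "mild"}})
token = store.grant("restaurant-agent", {"dining"})
dining_prefs = store.request(token, "dining")  # allowed: scope was granted
store.revoke(token)                            # later: retract the grant
```

The key design point is that the service holds a token, not a copy of the data, so revocation actually means something.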
Rather than agents feeling like a chicken tapping a screen, they will carefully guide and simplify choice via preference. Where preferences are lacking, they'll do the work to better understand us and help write back to our preference store. We'll need orchestrators for this.
As OpenAI and others like Minion AI work towards agency and the primitives required, they'll no doubt create an in-house solution to preferences. This is what we did at Viv (ex-Siri team), but that was 7 years ago when no one else was working on agents.
In today's reality, we'll need a common (shared) protocol and something more akin to a personal database that no one but us fully owns. We can grant, deny, and retract access, but the store itself will be ever-growing, ever-shifting, and self-pruning.
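The "ever-shifting, self-pruning" part can be sketched too: inferred preferences carry a confidence that decays over time, and entries that decay below a threshold get pruned. The half-life decay model and the threshold below are illustrative assumptions, not a specification.

```python
import time


class InferredPrefs:
    HALF_LIFE = 30 * 24 * 3600  # assumption: confidence halves every 30 days
    THRESHOLD = 0.2             # assumption: prune entries decayed below this

    def __init__(self):
        # key -> (value, confidence at observation, observation timestamp)
        self.entries = {}

    def write_back(self, key, value, confidence, now=None):
        # An orchestrator writes an inferred preference back to the store.
        self.entries[key] = (value, confidence, now if now is not None else time.time())

    def _decayed(self, confidence, observed_at, now):
        # Exponential decay: older observations count for less.
        return confidence * 0.5 ** ((now - observed_at) / self.HALF_LIFE)

    def prune(self, now=None):
        # Self-pruning: drop entries whose decayed confidence is too low.
        now = now if now is not None else time.time()
        self.entries = {
            k: v for k, v in self.entries.items()
            if self._decayed(v[1], v[2], now) >= self.THRESHOLD
        }


prefs = InferredPrefs()
prefs.write_back("cuisine", "thai", confidence=0.9, now=0)
prefs.write_back("spice", "mild", confidence=0.3, now=0)
# Two half-lives later: 0.9 -> 0.225 (kept), 0.3 -> 0.075 (pruned).
prefs.prune(now=2 * InferredPrefs.HALF_LIFE)
```

A weakly evidenced guess fades out on its own unless fresh observations rewrite it, which is one plausible way to keep the store "ever-shifting" without manual cleanup.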
It will grow to understand not only our purchasing decisions, but our inherent biases and proclivities, as well as our reasons for loving others.
When we look back in 50 years, it will not be AGI that we'll be most proud of, but rather our ability to deeply understand our own needs and how to bring them to fruition.
-- Rob