Pixels Are All You Need
Software is deprecated. All of it (sorry, vibe coders), and the entire internet along with it. Logic and user experiences are slowly collapsing into AI models, which today stream text, images, videos and the early hints of app clips backed by software.
In the future, AI models will just continuously stream pixels; by doing so, they'll also stream on-the-fly "software" directly to the user, but with no software actually required.
More than 9 years ago at Viv, we built dynamically embedded, app-like experiences (e.g. an on-the-fly e-commerce store for buying flowers, or a map-based ride-hailing app) that our conversational agent orchestrated to help a user with their query or intent. Fast forward to today: AI labs are finally exploring similar mechanics to enrich what amounts to a blank canvas for every new conversation or request a user has.
The problem is: you won't need any of those embedded apps or traditional software (other than for training data). Software has always been an inefficient bridge forcing a one-size-fits-all experience onto the masses. It's a lot of chrome for a little value.
Human Data as Code
It's perhaps apparent, but worth emphasizing: the training data fed into LLMs functions as a kind of code, much as software is backed by code. Better data, better models. Between pre- and post-training, these data imbue capabilities, personality, even reasoning to some degree. Current chatbots feel more like latent-space search and retrieval, but follow the trajectory and it starts to look like all the logic and code that typically gets encoded into software will one day be encoded directly into AI models. Software won't be necessary anymore; the AI model itself will be the software.
We're starting to see this in medicine, where AI can help orchestrate and offload certain rote tasks once enough clinician-generated data has trained it. Having trained on doctors' visit notes and medical research papers, AI can now generate those notes and papers itself (at least for the kinds of situations it has seen, though interestingly in a vast, combinatorial way, much as the best diagnosticians do).
At what point will we stop writing software for students, clinicians, bankers, scientists or engineers and collapse the entirety of software into the AI models themselves? And what will product development look like when we inevitably do?
One thing is sure: the internet itself won't be required anymore. It'll all be dead code.
Software as Pixels
What would your user experience look and feel like if you could just have the things you want without sifting through software to get them?
Language models have helped us fork off many experiences from one entry point, but you can only read a wall of text, look at a static image, or watch a video so many times before you hit a dead end. You need more, and software is today's stopgap, at the cost of rigid UIs and logic written by developers months, years, or decades ago.
But all you need is pixels. With pixels, you get text, and images, and videos, and... software. You can create any user experience imaginable when you directly stream pixels into someone's eyes (and one day with human<>brain interfaces, we won't even need pixels).
World models are starting to explore completely generative experiences in the form of games that evolve in "infinite" (within their latent space) ways. Carry that thought forward and it's easy to apply the concept of generative experiences to the entirety of software as well, no code required. Text, images, videos, interactive interfaces... all manifested in real time as pixels and backed by what AI is already great at: data retrieval & synthesis.
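To make the "pixels as the only interface" idea concrete, here's a deliberately toy sketch (every name in it is hypothetical, and a plain function stands in for the generative model): the sole contract between the system and the user is a stream of frames. The function rasterizes a user intent into ASCII "pixels"; a real system would stream model-generated frames instead.

```python
# Toy sketch of "software as pixels": the only output contract is a
# frame of pixels -- no widgets, no DOM, no app. A plain function
# stands in for a generative model here (hypothetical, illustrative).

WIDTH = 24  # frame width in "pixels" (characters)

def render_frame(intent: str, t: int) -> list[str]:
    """Rasterize a user intent into one ASCII frame at time step t.

    The frame *is* the interface: it merely looks like a UI for
    this moment, then is replaced by the next streamed frame.
    """
    border = "+" + "-" * (WIDTH - 2) + "+"
    label = intent[: WIDTH - 2].center(WIDTH - 2)
    status = ("streaming" + "." * (t % 4)).ljust(WIDTH - 2)
    return [border, "|" + label + "|", "|" + status + "|", border]

# Stream a few frames for one intent, as a model might per user request.
for t in range(3):
    print("\n".join(render_frame("buy flowers", t)))
```

The point of the sketch is only the shape of the contract: every experience reduces to frames over time, and the "software" is whatever the model decides each frame should show.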
A Blank Canvas
We've spent decades writing software, but we've largely plateaued on experiences because of the inertia inherent in writing and maintaining them. Creating software used to be a moat; now it's a liability.
A lot of software never should have existed to begin with; it just exists because there's too much inertia to change. Electronic health records (EHRs) are a great example of this.
In the future, we could optimize experiences far more fluidly if we simply understood what the intended outcomes need to be. In medicine, doctors are forced to write long, obtuse notes called SOAP notes in the hope that they will lead to better patient outcomes (in reality, they exist mostly for billing). Those notes then get forced into an EHR and orchestrated by a team to order labs, perform procedures, and so on. Those EHRs have grown so complex to support all of these needs that it often feels impossible to move away from them, with large health systems paying millions in annual software fees.
The problem is, it's not clear that writing those notes and storing them in EHR software is actually the best way of ensuring great care, or great task orchestration, record keeping, medical billing, compliance, etc.
A lot of software is just forced tradition. We sift through the muck of it with the stoicism of a cow standing in the rain. It's not actually helpful, and it's certainly not optimal.
If you could burn it all down and start anew, what would the ideal user experience look like for every need?
The future of software looks like no software at all. The future of user experience? Every experience imaginable, manifested through pixels alone.
-- Rob