Start at the End
What future are we slouching towards with AI?
When I’m consulting on a data science project, or training others on how to approach a problem as a data scientist, I always give the same advice: Start at the End.
You’ll only be able to reach your destination if you know what that destination is. And beyond that, you also need to think about what comes next after you reach it. Meaning, why were you trying to get there in the first place? What’s the greater goal?
Once you have this in mind, then you can come back to see where you’re starting from. What information do you have now? What are your observations? Your hypotheses?
After you have a clear view of your end point and your starting point, only then should you fill in the middle bits – the path you’ll take to get from A to B. This step comes last because there is almost always more than one pathway available, and selecting the best one requires knowing how it connects up at both ends.
I give this advice because it turns out that it’s really common for people to begin in the middle. Just hop on the road – any road – and start running. To be fair, in some cases, this might be the right approach.
When I was trying to start this Substack, for example, the advice that worked for me was: “Just get started! Whatever you’re creating doesn’t have to be perfect to start, it just needs to exist, and you can always refine it later.” And this is true – for creative endeavors at least. In the case of problem-solving, however, and most definitely in data science methodology design, this middle-first approach can lead to setbacks, delays, false starts, wasted effort, solving the wrong problem, or maybe worst of all, never solving any problem.
So, when I started to think about how I should approach the multi-faceted, layered ‘problem’ of AI, I decided to take my own advice and start at the end.
Ah, but which End?
The problem, as I originally observed it, was that the proliferation of and demand for AI had significantly impacted my job prospects as a long-time freelance data scientist. Over the last year or so, the clear trend has been a sharp increase in demand for the application of AI tools for solving business problems, along with a simultaneous decrease in demand for the application of data science techniques for the same use cases. It was obvious that in order to stay relevant (and employed) I would need to get serious about expanding my AI toolkit by deepening my knowledge and adding new skills, since up until recently I had only dabbled.
So that was the original “End” I was aiming for: stay employed and employable as a data scientist in this new AI-riddled landscape. The starting point was my existing knowledge of AI in terms of the theory and application of neural network models and NLP[1], and my (possibly outdated) understanding of the use cases of AI tools across various industries. My chosen pathway was, essentially, to hit the books. Meaning: do the research, learn everything I can, select the most useful skills to add, put those skills into practice, lather, rinse, repeat. I’d done the same thing 12 years ago when I was just starting out as a data scientist and had so much to learn about the greater data landscape that existed outside of the niche I originally trained in. I could do it again.
And so I rolled up my sleeves and got to work, but almost immediately hit – not a wall, but a flood. A flood of information about various models and tools and use cases but also a veritable deluge of essays and videos and articles, many with very dramatic-sounding headlines.
Now yes, of course, any cool new[2] technology is going to get people excited and talking, and if that new technology is deemed ‘good for business’ then certain sections of the internet will get worked up into an absolute frenzy, but . . . this feels different. Before, when it came to learning about buzzy new technical topics and their applications, there was of course a lot of information to sift through, but everything was just more, I dunno, straightforward? There were a finite number of emotional reactions and concerns around any particular topic. I mean, people were definitely excited about Big Data and Hadoop back in 2014, but nobody was saying that distributed computing was poised to bring about the downfall of civilization.
And to be clear, when I say I suddenly got hit by a flood of information, I’m not saying that I only recently started hearing about this stuff. I’ve been interested in AI ethics for a while. As evidence of this: three years ago I put together a two-hour interactive seminar about AI that included a (high-level) explanation of how deep learning models work as well as a discussion on selected ethics topics related to AI. I presented the seminar on November 18, 2022* for an audience at the company where I was working at the time (*note that that was about two weeks before the first free version of ChatGPT was released to the public.[3])
So believe me when I say that today, in late 2025, AI chatter is hitting different. And if you’re reading this post right now, I’m pretty sure you know what I mean.
This Feels Different. Why?
Why does the AI revolution or AI boom (or bubble) or whatever you want to call it feel different from, say, the internet revolution of the late 1990s to early 2000s, which ultimately disrupted and changed human civilization, but which we also all survived?
Several reasons. The first is Speed
Of course, any major sweeping change or revolutionary ‘boom’ will move slowly for a long time, then will suddenly move very, very fast. That’s what makes it a boom. But while it took years for the internet to grow and mature into a valuable and eventually essential technology, it appears that the proliferation and adoption of AI is moving at lightning speed. Businesses that 6 to 9 months ago had no direct dependence on AI are now not only embedding AI tools and workflows into every department and level of operation, but also, in some cases, mandating that employees abandon traditional ways of doing their jobs and start embracing AI tools to do them instead.
Even without any such mandate, those of us who do the majority (if not all) of our work on a computer are finding that seemingly every piece of software, online platform, or service we have ever used – and most certainly any new ones – has suddenly started to feature (sometimes quite intrusive and obtrusive) AI components, and these are non-negotiable, popping up wherever we go, whether we’ve asked for them or not.
Which brings me to the second reason: Choice
As consumers and individuals, we are not being given the same choices to adopt and utilize AI tools that we had at the dawn of the internet revolution. Even if you don’t choose to interface with a chatbot like ChatGPT directly – even if you don’t even go online all that much – AI is being increasingly embedded in the infrastructure that underpins daily life.
From energy grids and utilities to financial systems and logistics, many essential services now rely on AI-driven forecasting, optimization, and monitoring, which means that we all end up interacting with systems shaped by AI, whether we’re aware of it or not, and without the option to opt out.
Businesses and other organizations are also acting as though they don’t have a choice in whether or not they adopt AI, as though it has become an immediate competitive imperative necessary for survival. The bosses at the top of the tower have been super clear: ‘Start using AI to increase productivity and decrease costs now, even if it means replacing human cognitive work, even if the outputs are sometimes wrong!’
And this is at the heart of what I mean by Choice being a differentiating factor in this particular technological revolution: it comes down to whether the new technology is an Addition or a Replacement.
The internet started out very much as an Addition, an expansion and augmentation of existing physical infrastructures and processes, providing new options for how we interact with businesses, with information, and with other people. Honestly, it wasn’t even all that useful in its earliest days, by virtue of it being so new and not having enough quality content or a sufficient number of users. It was very much a choice to use the internet back then. Have you ever heard what a dial-up modem sounds like? We all had to make the deliberate choice to bring about that accursed cacophony if we wanted to surf the information superhighway of yesteryear.
And while today the internet has become completely embedded into the workings of human civilization, for a very long time, it wasn’t. For a long time, the internet was an additional option. You could pay your bill with a check in the mail, or you could do it online. You could do your banking in person, or you could use the bank website. You could print out your resume and hand it to the hiring manager, or you could send it by email. Those ors are everything. Those ors are gold.
Because it’s different with AI. There are a lot fewer ors. Sure, I’ve chosen to write this post myself in Microsoft Word rather than prompt an AI model to do it. But. Even as I type this, there is a little “magical pen” icon hovering juuuust off to the left which, if I click on it, will bring up Microsoft’s “conversational AI-powered assistant” with the option for it to “Keep writing this” for me and which, frustratingly, I can’t figure out how to turn off. And even if I don’t choose to use that particular tool, I still automatically get all kinds of red squigglies and blue double lines telling me that I’ve either made a mistake or else The Robot Doesn’t Like My Fanciful Writing Style and (perfectly comprehensible) neologisms.
And I know these are small things, and some of them (like spell check and even predictive text) have been around for years, but it’s an example of an or (you can write on a computer or you can write by hand) being replaced with a but (you can write on a computer, but the Robot will be ever-present and constantly attempting to “Keep writing this” for you.)
Which leads to the third reason why the AI Revolution feels different: The stakes are higher
Believe it or not, I’m not just talking about squigglies. The erosion of idiosyncratic writing styles and individual thought processes is something to be concerned about, but it’s only the tip of the proverbial iceberg when it comes to what AI has put at risk.
Look, this post is not the right place for a deep dive into all the concerns that exist today about AI. I plan to write more about some of them in future posts, and of course there are many other excellent writers and thinkers on this platform already discussing and examining these issues with thoughtfulness and humanity, not to mention considerable knowledge and experience.
For now, suffice it to say that there are multiple, substantial, and very valid concerns about AI, ranging from the ethical to the social to the economic to the political to the environmental to the existential. And some of them are not just vague future maybes; some we’re already starting to witness. And some of those are especially disturbing.
For one, AI has put the systems for trust (and the very concept of truth) at stake, and the consequences resulting from their collapse will be far-reaching and profoundly destructive. We’re already seeing the initial glimmers of this as content and media can be hallucinated out of nothing but then acted upon as if it were true.
Also at stake are the key pillars by which humans learn and employ critical thinking skills (reading and writing), which are already being shaken and are at risk of being pushed over. No one has asked for human cognition or creativity to be shifted away from human beings.[4] And no one thinks this is a good idea.
But it’s also already starting to happen. And why?
Well, why do you think? Same old stupid story as always: it’s about money.
Whatever promise AI holds as an extraordinary problem solver (and it absolutely holds that promise!), it would appear that the current frenzy of activity and pressure to adopt this technology – the reason it’s getting shoved down our collective throat – is being orchestrated from the top down by a shockingly small cabal of the uber rich and powerful (individuals, but also corporations; I recently read that Nvidia has become the first company in the world to reach a market valuation of $5 trillion USD) and the persistent, capitalist goals of increasing productivity while reducing costs . . . at any cost.
Jesus, what a very old and very boring story. Don’t they know there’s more to life than money? Don’t they know that?[5]
Obviously I’m compressing a lot of detail and simplifying for effect, but from everything I’ve read so far, it very much seems that the grand push for AI we are experiencing right now is not at all about solving humanity’s most difficult problems. In the short term, the benefits of speedy AI adoption are only monetary in nature and will only go, ultimately, to the dwellers at the top of the tower.
Only bad things remain for the rest of us.
Once I start to pull the thread on the ‘AI Problem’ in this way, I find it very (very) easy to start catastrophizing. I play out the scenarios in my head, fast-forwarding to the worst possible outcomes, like a montage of flickering images at the start of a movie set in a post-apocalyptic dystopia over which a faceless narrator explains to the audience how they ended up there – their voice worn down, weathered, tired.
How Did We Get Here? Or, Only Bad Things Remain
It happened fast, faster than we thought. All our jobs started getting replaced by AI all over the place - across every industry, every sector of human endeavor. Soon, networks of AI agents and workflows became the default operational and decision-making apparatus underlying all of our essential systems: infrastructure, commerce, health, education, government . . . everything.
Problem was, the models powering those systems were becoming increasingly corrupted due to the erosion of truth and their steady diet of recycled AI slop. Chokepoints started to form. A few bad men continued to make one bad decision after another, just as long as each fool step made them one more dollar.
Eventually the first wobbly domino fell, and since everything was connected and automated and no humans had been left in the loop, all too quickly the next one fell too, then the next, and eventually we had cascading systemic failures, each with its own catastrophic consequences.
The social fabric didn’t just tear, it ripped.
And well, you know the rest. It’s like the fella said:
‘Things fall apart; the centre cannot hold’[6]
Okay, yes, I’m being facetious here, and not a little alarmist. Will AI bring forth the downfall of society? I don’t know. Probably not?
What I do know is that I don’t want to actively work towards that kind of scenario. Nor do I want to passively stand by while any aspect of that hypothetical end state comes to be.
So I feel compelled to reexamine my own end goals where AI is concerned.
I know I want to stay employed and employable as a data scientist, because I love data science and I want to continue doing it for as long as I can – even if that means accepting that AI will probably change how that work looks somewhat.
But I can’t pretend that the lightspeed adoption and proliferation of AI doesn’t have larger and more important consequences.
So I change my thinking and I start again, but again, I start at the end. A different ending this time though: a much more hopeful one. This time the narrator of the movie in my head has a young person’s voice, and on the screen a new day is dawning.
How Did We Get Here? Or, The End I Want to Start From
All the AI stuff seemed chaotic at first, and kind of dangerous, and things were moving really fast, too fast to get properly thought out. But then enough people got interested, and got involved, and eventually we found a way forward that worked.
We figured out how we could integrate and use AI models and tools only where they were needed and wanted, so they could provide new options to enhance and extend - but not replace - human capabilities.
We deployed models thoughtfully and strategically to solve complex problems in areas like climate science, food distribution systems, epidemic readiness, and drug discovery. It’s amazing how much we’ve been able to accomplish so far!
We made sure the ecosystem of AI tools and models was fully democratized – with tons of options – so risks were lowered and gains were better distributed. We erected guardrails in the form of policies and laws to protect against bad actors with even worse motives.
The hardest part was figuring out how to recognize truth and differentiate it from all the shades of non-truth that kept obscuring it – so that scientific progress didn’t stall, and so the AI models stayed healthy and smart. This is actually an ongoing effort that we have to keep on top of, but it’s created a lot of new jobs. There are so many new jobs now! And all of them are safe, dignified, and accessible to more people than ever before.
We still have work to do; we always will, but we are better off now than we were before.
What can I say?
Things stayed together; the centre held.
Yeah, yeah, yeah. A nice dream. I know. But not an impossible one. I don’t think it is, anyway, even if we are starting out a little behind.
So what’s my point? It’s what I said before:
You’ll only be able to reach your destination if you know what that destination is. And beyond that, you also need to think about what comes next after you reach it. Meaning, why were you trying to get there in the first place? What’s the greater goal?
Well, what I’m trying to do is to stay employed and employable by expanding my knowledge base and skillset to incorporate AI research, tools and models.
But I will be navigating that path so that it also leads towards that greater goal down the road – that destination farther along in the distance, on the edge of sight, the one where the sun is rising on a new day.
Time to get started.
[1] Natural language processing
[2] AI isn’t ‘new’, but you know what I mean
[3] My timing was coincidental, by the way; I didn’t know about the planned release date. My dorkiness, on the other hand, in volunteering to write and present a seminar on AI, was, and is, eternal.
[4] While no doubt humans will always find ways to Create and Cogitate, I fear they could become luxuries, or privileges, that get inequitably distributed. I have the absolute luxury and privilege right now to be taking the time to type these words and not instead be hustling for any job that will take me, regardless of how much or how little AI they want me to use.
[5] Apparently they do not
[6] W. B. Yeats, “The Second Coming” (1920)