AI Clarification

I've been mulling over your questions in response to my Copilot post all evening, and will hopefully have a more fleshed-out response in the coming days, but I wanted to at least give some initial reactions.

…you have to understand the system as a whole to make a proper argument, and whatever answer you return with is going to be dependent on that system and its internal rules.

This is an excellent point and, indeed, it's merely AI in its current form that I object to. Or, actually, that's way too simplistic: AI has been around for a good long while and has been doing a lot of good, and probably a lot of bad, during most of its existence. But it's this class of LLM tools, wielded by these specific companies, trained on everyone's data in this specific unregulated and over-hyped environment, that I object to. I don't know whom to blame, and maybe I'm just flailing a bit. I mentioned this in my last post, but maybe it's mostly a matter of taking advantage of how much faster tech moves than government. It's a regulatory arbitrage of sorts, if that's the right way of putting it.

So I absolutely don't object to AI generally, and I think your nuclear energy analogy is a good one. I've heard people suggest that we should think of AI more like electricity or the industrial revolution in terms of how fundamental a shift it will bring to society, and I suspect they may be right. So I don't have any illusions that it will go away, or that I can opt out permanently (or that we necessarily should even if we could). So, to clarify, my dilemma really does have to do with using it in its current form.

Maybe, given what I've just said, I should be more forgiving of the hype cycle. But the kind of sky-is-falling, all-encompassing AI circus, like the agenda I shared in my last post, is just so off-putting. I do think there's a lot of what you suggested: companies racing to capitalize on trends and investments regardless of whether they have anything substantive to offer, or whether the technology makes any sense at all in the context in which it's being sold. What does "AI first" even mean? Or what does it mean to "automate growth" with AI? It reminds me of how "cloud" was slapped onto everything for a while, or "internet of things". Similarly, more esoteric technologies like microservices, NoSQL, streaming databases, and the like were once so ubiquitous that you'd swear everything you knew was obsolete. It's not a perfect comparison, since these AI tools are far more general and can and will be used to build all sorts of things, but I do think the one or two key tools behind recent advancements are being stretched a lot.

Going back to the conference I mentioned before: as an admin, I still need to know how to do all of the crap I needed to know how to do before ChatGPT got here, but there are almost no general training topics at the event. To be extra cynical, maybe that's because you can't up-sell teaching people how to use what they've already bought, but you can up-sell them an AI assistant to do it instead. Further, it sure seems like 90% of these new tools are just API calls to ChatGPT bolted onto existing interfaces, for which vendors turn around and charge obscene fees so you can say you've magically joined the AI revolution. And, maybe more importantly, given that these are all OpenAI-backed tools, they all carry the baggage and ethical issues I mentioned before.

So, I guess I've talked myself out of being more forgiving of the cycle.

I do have some hope that this settles over time. I would be happy to use an ethically trained LLM for most of the things ChatGPT is currently being used for (I already happily use code completion, text suggestion, and the like). I suppose it's an open question what an "ethically trained" model really means in practice, and whether it would be any good. But even without that, I believe there's a lot of room for more narrowly scoped models, ones trained for specific purposes on topic-based or local-only data. I'm not sure what the popular appetite for that kind of thing would be, though, or whether companies would invest in such tools when far more general, far more powerful ones exist and are sucking up all the oxygen. But maybe that's a space that opens up once regulations hopefully kick in.