First off, it seems like a big mistake to refer to chat bots like ChatGPT as “AI.” Tbh, that heavy branding/delusion makes me think humans might be much stupider than I feared. Scary! In all the movies and Jetsons cartoons, what made the robots endearing, relatable, and genuinely “artificially intelligent” was basically metacognition. I don’t care how many GPUs are running: these chat bots don’t do metacognition, i.e., thinking about their own thinking.
ChatGPT is a “black box” chat bot. You can submit the same input twice and get two different outputs, because these systems are “non-deterministic”: they sample each next word from a probability distribution instead of computing one fixed answer. While that randomness is sold as a benefit, it makes reproducing and debugging behavior nearly impossible, which has several undesirable consequences.
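Here’s a minimal sketch of what I mean by non-deterministic, using a toy stand-in for the model (the word list and probabilities are made up; a real model scores tens of thousands of tokens and applies temperature to logits, not probabilities):

```python
import random

# Toy stand-in for a language model: given a prompt, it assigns
# probabilities to a handful of possible next words. (Made-up numbers;
# a real model scores tens of thousands of tokens.)
def next_word_probs(prompt):
    return {"yes": 0.40, "no": 0.35, "maybe": 0.25}

def sample_reply(prompt, temperature=1.0, rng=random):
    probs = next_word_probs(prompt)
    words = list(probs)
    # Simplification: at temperature > 0 the pick is random, so the
    # exact same prompt can come back with a different reply.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return rng.choices(words, weights=weights, k=1)[0]

prompt = "Is this answer correct?"
print(sample_reply(prompt))  # maybe "yes"
print(sample_reply(prompt))  # same prompt, maybe "no" this time
```

The two print calls can disagree even though the prompt never changed. That, in miniature, is the debugging problem.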
The history of computers, from the beginning until the public release of ChatGPT, was a story of binary: zeroes and ones. ChatGPT and other similar (so-called) AI chat bots are different. Yes, they are absolutely still binary at a fundamental level; the software runs on zeroes and ones like everything else. However, the interface lets users interact with the software in a much more amorphous manner.
The binary nature of computers aggravated people and led to a (more) stratified society. AI is routinely referred to as an “equalizer,” where people no longer need specialized skills to produce computer software, but realistically, that’s a fuckin lie. Realistically, I would never want to run vibe-coded software on my device. Realistically, no enterprise with any real quality assurance process is ever going to let vibe-coded software ship out the door (I hope!). Like, probably the single most common way software gets abused is by feeding it malicious input (injection attacks) and hacking it that way. And I’m supposed to believe vibe-coding is some awesome breakthrough? Give me a break. 🙄🙄🙄
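For the record, here’s the kind of input injection I’m talking about, as a minimal sketch using Python’s built-in sqlite3 (the table, rows, and query are all made up for illustration):

```python
import sqlite3

# Tiny in-memory database for illustration (table and rows are made up).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
db.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

user_input = "alice' OR '1'='1"  # a classic malicious input

# Vulnerable: pasting raw input into the query lets the attacker
# rewrite the query itself and dump every row.
leaky = db.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print(leaky)  # returns ALL users, not just alice

# Safer: a parameterized query treats the input as data, not SQL.
safe = db.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(safe)   # returns nothing, because no user has that weird name
```

The vulnerable version is exactly the kind of thing that slips through when nobody who understands the failure mode is reviewing the code; the parameterized version treats input as data instead of executable query text.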
What (so-called) AI has led to is a technological landscape where many of the benefits of the digital paradigm (clear answers; zero or one) have eroded. Whereas computers used to seem overly “rigid,” ChatGPT and similar chat bots are extremely flexible with facts by design, often blurring the difference between right answers and wrong answers and leaving a sort of grey where there used to be only black or white. One supposed benefit of this is that users get an individualized interface rather than one designed by a stranger.
The consequences of this approach are profound. For one, it makes it quite difficult to provide instructions. Throughout history, people have relied on shared interfaces and shared ways of doing things. Maybe some particular method isn’t the most efficient; sure, I agree. But when everyone uses the same interface, instructions can be communicated, simplified, and standardized. Now that standardization is no longer a given. It’s like a return to idiosyncrasy, like undoing the Ford assembly line and stepping backwards in history. And not to put too fine a point on it, but don’t you suspect there are benefits to learning to do something in a way other than the way you’d intuitively do it?
Don’t get me wrong: I think (so-called) AI is great. I fully believe it can deliver tremendous benefits in industrial production, drug development, materials science, “and such as.” However, right now it’s being marketed as some magical oracle, which it most certainly is not.