The tech world is calling it “the greatest human invention of all time.” I believe it will prove incredibly useful, though for reasons different from what most people think. But first, a prediction. One of the critiques of Christian scientific inquiry is that it is non-falsifiable or untestable, but I don’t think that’s quite true. One way to test a framework, short of outright falsification, is through its ability to make predictions. The Intelligent Design movement, for example, successfully predicted that “junk DNA” would prove to be functional, after macro-Darwinists had cited it for decades as proof of an unguided, blind process by which lifeforms with different body plans evolved from a common ancestor. About a decade later, the ENCODE studies showed that non-coding DNA regions function in regulating gene expression. Either unguided evolution is much more intelligent than evolutionists initially theorized, or not much junk has accumulated because the evolutionary histories of most species are shorter than assumed.
Similarly, I’ll predict that we will never achieve AGI, or artificial general intelligence. Philosophers have challenged computer scientists’ simplistic assumption that consciousness magically emerges once enough computing power comes online. AGI advocates dismiss this by claiming that consciousness is simply an illusion or delusion sitting on top of a sufficiently complex computer, like the human brain. If that’s true, who or what inside us, in our experience of the self, is observing the illusion or delusion of consciousness? We even observe ourselves observing things; this has been the fundamental human experience. Animals largely have no concept of the self (most don’t recognize themselves in a mirror, for example), while humans do. If the Christian concept of a soul is correct, AI will never have this, though it may be able to emulate it. I have no doubt we will see the emergence of AI relationships or even AI religions, but all will be emulated, no more real than a video game or TV show. AI will remain dependent on good prompting to produce the right emulation, and the prompt, the will or the desire behind it, is where the uniquely human part resides.
AI so far seems really good at two things: restating or summarizing existing information produced by humans, and translation, not only between human languages but from human language into computer code. That so many people see this as revolutionizing white-collar work simply demonstrates how inane much of that work is (Dilbert was right all along). The most useful tools so far “summarize meetings” and the like. Most business meetings consist of people spouting opinions without evidence, with very little actual content, so we should not be surprised that they compress and summarize easily. Any properly run meeting shouldn’t need a summary beyond the actions taken and decisions made, and the people responsible should have those in hand when it’s over.
I’d make a comparison to file compression. Most people’s words and ideas carry a lot of fluff, and AI is good at identifying the few concrete ideas buried in it, then restating them with new fluff or summarizing them with the fluff removed. If your job is to “strategize new synergies” rather than produce falsifiable, data-backed, actionable work, then yes, AI can replace you, or at least demonstrate how little value such roles bring to a business, since that work product is now indistinguishable from a computer’s randomly seeded gobbledygook.
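The compression analogy can be made literal: repetitive, low-information text compresses far more than dense text, so a compressor is a crude fluff detector. A minimal sketch using Python’s standard `zlib` (both sample texts below are invented for illustration):

```python
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size divided by original size; lower means more redundancy."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, level=9)) / len(raw)

# Hypothetical corporate fluff: the same idea restated over and over.
fluff = ("We will leverage synergies to strategize new synergies "
         "and align our synergy strategy going forward. ") * 20

# Denser text: each sentence carries distinct information.
dense = ("Q3 revenue fell 4% to $2.1M. Churn rose from 3% to 5%. "
         "Action: Smith ships the retention fix by Oct 12.")

print(compression_ratio(fluff))  # far below 1: heavy redundancy
print(compression_ratio(dense))  # close to 1: little to squeeze out
```

The point is qualitative, not rigorous: Shannon-style redundancy is a rough proxy for how little a block of prose would lose in summarization.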
AI will not replace soft skills, those involving directly managing or persuading other people. AI tools may help here, but they carry a big risk. Authenticity is the key to influence, and if people find out you used AI on them, it will ruin their trust, which rests on your genuine care for them as people. AI is also limited to text-based communication at the moment, the poorest format for emotionally laden conversations. I suppose managers may eventually wear earpieces that listen in on conversations and propose responses, but the processing delay will make those conversations awkward at the individual level. Or maybe phone conversations will be emulated by AI entirely. But the minute people discover a computer is interacting with them, they will be deeply creeped out and all trust will be gone. Expect an arms race between AI seeking to emulate high-cost human interactions (whose value is proportional to their perceived cost and authenticity, like phone conversations) and technologies that filter out AI spamming of those interactions.
We will likely need better laws to deal with these issues. If we value sincere human interactions, we need penalties for the use of AI to parasitically emulate humans and ruin the channel. Similarly, we may need to reform copyright law since generative AI is essentially a remixer of existing copyrighted content. We don’t allow copyrights to protect ideas, only their expression, to allow humans to reprocess and re-express those ideas in new formats. But when the expression of ideas becomes a mechanical function, how then do we protect intellectual property such that humans are still incentivized to produce it?
Let’s say you are a physician who wrote the authoritative textbook on liver tumors. OpenAI scrapes your book and then sells your work product for pennies to anyone who wants it, because the ideas in the book aren’t protected, only their expression. The algorithm, though, is so complex that even OpenAI can’t trace exactly which references informed a given answer. We may need laws draconian enough to require AI companies to license every source used to train the model and to require the models to tag sources for any output produced (which would vastly increase processing costs), with payments for each use. Google was parasitic enough on information creators when it merely linked to sources; what happens when AI generates its own answers derived from human sources and runs ads next to them? Expect big players to get involved in shaping this legislation.
The courts could come to the rescue with a logical expansion of the existing derivative works doctrine. Copyright law holds, for example, that a novelist owns any derivative works based on his or her work, such as a screenplay or translation. Copyright law also holds that machine processing is not a creative act and adds no protectable element of intellectual property. A very simple, but severe, interpretation could be as follows: any use of any given training text without proper licensing that results in a similar output to the copyrighted work is statutory infringement, subject to fines of $25,000 to $75,000 per infringement, without requiring any proof of actual damages.
Arguably, each and every output of an LLM is a separate act of infringement. As in ADA litigation, enterprising lawyers could simply hire users to set up ChatGPT accounts and create these acts of infringement ad infinitum. If LLMs are designed, on purpose or by accident, to disable any tracing of sources, a court may presume, under the rules governing sanctions for discovery violations, that an unlicensed training text has been infringed whenever the output shows any surface similarity to it. If the models must trace sources, they likely become slow and useless, since it is in the nature of machine learning that those who build the models can’t really explain how they work. Result: all LLMs are bankrupted overnight and crippled by endless litigation. The courts are simply going to have to decide whether Silicon Valley is allowed to steal everyone else’s creative output and sell it for $20 per month.
Legal issues aside, what is AI good at? So far, it is very good at editing, categorization, and translation. The LLMs seem to have finally cracked the strange rules of English grammar. Perhaps most powerful for semi-technical users is its ability to turn pseudocode into computer code for simple scripts, a skill akin to its language-translation abilities. This is what I’m personally most excited about. I am technical enough to adequately specify most computer programs I would like built, but not skilled enough in coding to execute those specifications. So far, AI seems like a boon: it will make proficient coders superhuman and technical non-coders proficient. But here it is merely translating at a high level. The non-coder is still subject to garbage-in, garbage-out and must present a coherent specification to translate into code. GPT-4 can reference a short novel’s worth of information as input (including its own output), but the code and documentation bases of anything truly complex will rapidly exceed this. ChatGPT already caps input length well below that threshold, even for GPT-4, forcing anyone who wants a larger context to pay for the API. Having it process larger bodies of work will become exponentially more expensive.
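The pseudocode-to-code workflow described above can be illustrated concretely. Below is the kind of plain-language specification a technical non-coder might write, followed by the Python an LLM would plausibly produce from it (the task, data, and function name are invented for this example):

```python
# Specification a technical non-coder might hand to an LLM:
#   read a CSV of (name, amount) rows, skipping the header;
#   sum the amounts per name;
#   return names with totals, largest first.

import csv
import io
from collections import defaultdict

def totals_by_name(csv_text: str) -> list[tuple[str, float]]:
    """Sum amounts per name from CSV text; return (name, total) pairs, descending."""
    totals: dict[str, float] = defaultdict(float)
    reader = csv.DictReader(io.StringIO(csv_text))  # header handled automatically
    for row in reader:
        totals[row["name"]] += float(row["amount"])
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

sample = "name,amount\nalice,10\nbob,5\nalice,2.5\n"
print(totals_by_name(sample))  # [('alice', 12.5), ('bob', 5.0)]
```

Note that the specification had to be coherent: it names the input format, the grouping key, and the sort order. Leave any of those out and the translation, human or machine, has to guess.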
The capabilities of AI are also rapidly outstripping the growth of computing power. GPT-3.5, so far, appears to have limited use cases beyond amusement and producing spammy, low-information content. GPT-4 is extremely impressive but is ten times more expensive, API access is still limited, and usage is capped. It’s still not a system that can be trusted without human auditing for critical work, though perhaps GPT-5 will exceed human capabilities. I could be wrong, but the models will soon exceed the hardware capabilities we have to scale them. Moore’s Law has been slowing for decades, and only so much efficiency can be wrung out of existing algorithms. I fear the “turbo” versions are the equivalent of highly compressed images: speed and storage come at the expense of artifacts. We can’t trust that this is a viable product until the tech-bubble model of gathering eyeballs at any cost is gone and users must pay for their use of scarce resources. At the moment, OpenAI is losing millions per month running the models publicly.
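The compressed-image analogy can be sketched numerically. Lossy compression in its simplest form is quantization: storage shrinks, but every value picks up a small, irreversible error, the “artifact.” The toy signal below is invented for illustration and is not a claim about how any “turbo” model is actually built:

```python
# Quantize floats in [0, 1] down to 8-bit integers and reconstruct them.
# Storage drops 8x (64-bit float -> 8-bit int), but reconstruction is
# only approximate: each value carries a bounded error (the "artifact").

def quantize(x: float) -> int:
    return round(x * 255)

def dequantize(q: int) -> float:
    return q / 255

signal = [0.1234567, 0.5, 0.9999, 0.3333333]
restored = [dequantize(quantize(x)) for x in signal]
errors = [abs(a - b) for a, b in zip(signal, restored)]

# Worst-case artifact is half a quantization step: 0.5 / 255 ~ 0.002.
print(max(errors))
```

The trade is always the same shape: fewer bits per value buys speed and space, and the error never comes back out.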
One thing to keep in mind is that the AIs are not all that smart, yet; they can appear to be because they have instant access to all accumulated human knowledge. They know arcane trivia, but their ability to combine those ideas abstractly remains limited. It just turns out that a lot of white-collar work isn’t all that abstract, just word salads. The human brain remains the universe’s most impressive object (especially on a power-efficiency basis), but it is poor at rapid recall and makes easily correctable errors like typos with surprising frequency. AI-augmented human workers with high innate capability will be impressive indeed.