I don't think that anyone who is not developing AI ...
by Ofcr. Tim McCarthy (2023-03-25 12:21:03)
Edited on 2023-03-25 12:42:35

In reply to: Levi's will begin using some AI models in ads (link) posted by Raoul


... is even remotely ready for AI. I read Bill Gates' recent note on the subject (linked), and even his take suffers from the same fundamental incoherence and the same wishful thinking (or, to some extent, gaslighting) as pretty much all other attempts to assess the likely effects of the onset of the stuff. To wit, those attempts regularly tell us two inconsistent things:

i) The technology is swiftly going to exceed human capacities and thus will be absolutely transformational across pretty much all domains -- as an interviewee told Thomas Friedman (for some reason that is lost on me), it's "going to change everything about how we do everything."

ii) The disruption won't be too bad, though, because these intelligences are just going to be our helpers. As Gates tells us, "the demand for people who help other people will never go away. The rise of AI will free people up to do things that software never will—teaching, caring for patients, and supporting the elderly, for example." "AIs will empower people to do [sales, service or 'document handling'] more efficiently." Access to AI will "be like having a white-collar worker available to help you with various tasks." AI will "help health-care workers make the most of their time by taking care of certain tasks for them—things like filing insurance claims, dealing with paperwork, and drafting notes from a doctor’s visit." AI aimed at education "will know your interests and your learning style so it can tailor content that will keep you engaged. It will measure your understanding, notice when you’re losing interest, and understand what kind of motivation you respond to."

Neither Gates' attempt nor most of the others, however, explains why these developments might be limited simply to "helping people be more productive." More specifically, none of them explains why people will remain necessary in many of these domains at all. Gates begins his essay with the example of GPT-3 (that is, the previous generation of the technology) passing the AP biology exam with 59 correct answers out of 60. And that's little surprise -- as Gates tells us ...

"The amount of data in biology is very large, and it’s hard for humans to keep track of all the ways that complex biological systems work. There is already software that can look at this data, infer what the pathways are, search for targets on pathogens, and design drugs accordingly. Some companies are working on cancer drugs that were developed this way. The next generation of tools will be much more efficient, and they’ll be able to predict side effects and figure out dosing levels."

If this is true -- and it seems that it is -- then why would anyone be concerned about teaching students biology anymore at all? To the extent that humans might be necessary to run lab or manufacturing operations, the overwhelming majority of those relatively few humans won't need biology degrees to do it. And why would anyone choose to study biology and make a career of it? You aren't going to discover anything; at best, your curiosity will be confined to confirming or ruling out the propositions developed by the machine, and you will be paid accordingly.

Indeed, there's a very open question as to why people might continue to learn at all. If the biology AI can do all the biology and the comp-sci AI can do all the coding and development and the engineering AI can do all the engineering, then why should we care that the education AI will "know your interests and your learning style so it can tailor content that will keep you engaged[, and] measure your understanding, notice when you’re losing interest, and understand what kind of motivation you respond to"? Not to mention that, when faced with the prospect of undergoing that course of surveillance and manipulation, anyone who has chosen to enroll in a course for some reason can just enlist a specialized CheatGPT to manipulate ChatGPT right back, and turn instead to playing video games and jerking off.

And if you do go ahead and study a subject, and don't cheat but actually learn, then what awaits? "Company-wide [AI] agents will empower employees in new ways. An agent that understands a particular company will be available for its employees to consult directly and should be part of every meeting so it can answer questions. It can be told to be passive or encouraged to speak up if it has some insight. It will need access to the sales, support, finance, product schedules, and text related to the company. It should read news related to the industry the company is in." Nevertheless, Gates assures us, "I believe that the result will be that employees will become more productive." Really? In the presence of an intelligence that knows everything about the company and its markets at all times, and that is capable of better strategic and operational analysis than any employee, including the CEO? Forget whether the AI "should be part of every meeting." Why should there be meetings?

Everyone senses this tension, and when approaching the heart of the matter most everyone who writes on the subject responds that, well, yes, there will be disruptions, so the government has to figure out how to retrain everyone so that they can fit with the AI. And it's really not urgent or anything, because really strong AI is still years away. We can relax, because the terminators aren't here yet. But of course that's no answer at all. The specialized, non-general AIs that are already here will -- will -- do plenty of damage all by their juvenile-terminator selves, long before their descendants arrive on the scene.

This isn't an argument for banning the stuff, because that's a pipe dream. For the same reason, it's also not an argument for regulation -- as a republic we are much less capable of regulating AI than we are of understanding it, and as a republic we don't understand it at all. There'll come a time for productive discussion of how to regulate these capabilities, and that time will be after the technology has thrown us into multiple crises on top of the ones that we have on our plate now, assuming arguendo that by then we haven't turned over the work of governing ourselves to Chat-GGPT (Governmental Generative Pre-Trained Transformer).

Instead, I suppose -- because I didn't really think through what this might be an argument for, before I started -- this is an argument for only one meager little proposition: When you read things about how AI isn't going to be terrifyingly disruptive, that it's just going to be our faithful helper, that the real risks are still years away, don't buy it. Start thinking hard about what it might do to your life now, because it's here now, in plenty enough force to f*ck a lot of people up.



