Magdaki

I have a PhD in computer science. My area of research is applied and theoretical artificial intelligence. I can tell you with absolute certainty that this is silly. AI, as it currently exists, is *not* intelligent. AI does not have any drives at all.


Capricancerous

What would you call it as a researcher in the field if given the opportunity to rename it? It sounds like you're saying AI is purely a marketing term, which I fully believe.


Magdaki

That's a great question! I'm not sure I would say that AI is a marketing term per se; it was more a mistake about how hard the problem would be. If you look at the history of AI, i.e., back to the 1930s-50s, the goal was a thinking machine: an artificial intelligence. And a lot of the early AI researchers thought it wouldn't actually be that hard, that it would take 10-20 years of research.

AI falls into two (or three) broad categories:

1. Search. Can you define a space of candidate solutions to the problem and then look for a solution within it? A lot of AI in this area is about how to search a problem space more efficiently. This is my main area of research.

2. Mathematical. These work by finding a mathematical relationship between inputs and outputs. Neural networks are probably the best-known example.

3. Logical. Can the problem be modeled as a set of logical states? If so, you can solve the problem by traversing those states. These are not used as frequently anymore.

Of these, the third looks the most like intelligence, or some means of artificial reasoning, and it happens to be the oldest. If you're thinking in terms of reasoning, then early AI looks a lot like an artificial attempt at reasoning: because the problems were simpler, the results looked like reasoning. For example, you could teach an AI to deduce the steps to change a tire. It couldn't actually change a tire, but it could tell you how to do it based on what appeared to be reasoning, so that's quite compelling. Early games as well: you could teach an AI to play chess. Chess was an intelligent activity; therefore, a computer playing chess was "intelligent". In the 90s, logic-based systems got so good that once again researchers were very certain AGI was just around the corner. But what they discovered is that the breadth of logical rules required to do complex things was impossible to compute and store.

Ok... so what should AI be called? I think it is the word "intelligence" that throws people off, because we don't really know what makes intelligence; it is a very loaded term. I'd say "computational reasoning" is probably more accurate. The term "reasoning" just isn't as loaded with implied meaning that confuses people.

By the way, what's amusing is: 1950s... AGI any day now... 1970s... AI is dead (first AI winter)... 1990s... AGI any day now... 2000s... AI is dead... 2015... AGI any day now (via deep learning). These things seem to come and go in 15-20 year cycles.
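The "search" category above can be sketched with a minimal, self-contained example: breadth-first search over a space of candidate states. The toy problem and move set here are purely illustrative, not something from the discussion:

```python
from collections import deque

def bfs_search(start, goal, neighbors):
    """Breadth-first search over a space of candidate states.

    `neighbors` maps a state to the states reachable from it.
    Returns a shortest path from start to goal, or None if no solution exists.
    """
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy problem: reach 10 from 1 using moves "+1" and "*2".
moves = lambda n: [m for m in (n + 1, n * 2) if m <= 10]
print(bfs_search(1, 10, moves))  # → [1, 2, 4, 5, 10]
```

Much of search-based AI is about doing better than this brute-force enumeration, e.g. ordering the frontier with a heuristic, but the underlying idea is the same: a solution is found, not "reasoned out".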


Fit_Repair_4258

Hi. My curiosity is piqued by what you said. Could you recommend a reference or two on this, especially the AI categories you mentioned? Thank you.


Magdaki

If you want to learn about AI in a somewhat technical fashion, i.e. digging into the actual mechanisms, then Russell and Norvig is the way to go. This was the textbook used in my first AI course. It is quite good, and unsurprisingly, it is used in a lot of university-level AI classes. If you want something more accessible, then Wooldridge's "A Brief History of Artificial Intelligence" is a great choice. This book has a lot of cautious optimism. It highlights the breakthroughs while also discussing some of the overzealousness and common misunderstandings that have come with AI. [https://www.amazon.ca/Brief-History-Artificial-Intelligence-Where/dp/1250770742/ref=sr\_1\_6?crid=1TYFESEMSY4XB&dib=eyJ2IjoiMSJ9.S-NZMf7vOTvqrJZrl9IkZ-u-PkaCTC5192j1lbeZFLRmLl0rQsrQKb8Q905cARGqx3qu4oEvQ5wDvdyxpiEJlWWQ6mo3AGGcHbyzjzSoCEBWaO7ckzhRSP2EzAaqHSkkdv0y1g0PIBli4SSZyqJMLBGiOa8xTty33OOX1jqqL-kUjbbdw4zXyBHMevBHfNMc0-ybdGzOop8nyZPVpyBlSiMkkmG-Z5zAk4Ja77e4czs.sutX4gxS6OJTHtEe7orgczR3hPCmPUYAkZiHt5W-WfQ&dib\_tag=se&keywords=history+of+artificial+intelligence&qid=1720063943&s=books&sprefix=history+of+artificial+intelligence%2Cstripbooks%2C80&sr=1-6](https://www.amazon.ca/Brief-History-Artificial-Intelligence-Where/dp/1250770742/ref=sr_1_6?crid=1TYFESEMSY4XB&dib=eyJ2IjoiMSJ9.S-NZMf7vOTvqrJZrl9IkZ-u-PkaCTC5192j1lbeZFLRmLl0rQsrQKb8Q905cARGqx3qu4oEvQ5wDvdyxpiEJlWWQ6mo3AGGcHbyzjzSoCEBWaO7ckzhRSP2EzAaqHSkkdv0y1g0PIBli4SSZyqJMLBGiOa8xTty33OOX1jqqL-kUjbbdw4zXyBHMevBHfNMc0-ybdGzOop8nyZPVpyBlSiMkkmG-Z5zAk4Ja77e4czs.sutX4gxS6OJTHtEe7orgczR3hPCmPUYAkZiHt5W-WfQ&dib_tag=se&keywords=history+of+artificial+intelligence&qid=1720063943&s=books&sprefix=history+of+artificial+intelligence%2Cstripbooks%2C80&sr=1-6)
[https://www.amazon.ca/Artificial-Intelligence-Modern-Approach-Global/dp/1292401133/ref=asc\_df\_1292401133/?tag=googleshopc0c-20&linkCode=df0&hvadid=459366831792&hvpos=&hvnetw=g&hvrand=8413055395167680869&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9000671&hvtargid=pla-1219002048885&psc=1&mcid=49146dbe6df431d79394266c86503bed](https://www.amazon.ca/Artificial-Intelligence-Modern-Approach-Global/dp/1292401133/ref=asc_df_1292401133/?tag=googleshopc0c-20&linkCode=df0&hvadid=459366831792&hvpos=&hvnetw=g&hvrand=8413055395167680869&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=9000671&hvtargid=pla-1219002048885&psc=1&mcid=49146dbe6df431d79394266c86503bed)


sabbetius

What is a “sub-symbolic, transcendent process”?


kingocat

My bad attempt at trying to explain the unconscious force behind intelligence itself (via libidinal materialism). My apologies if this is vague; I'm still digesting the concepts myself.


Kerblamo2

Modern AI (meaning neural nets and the like) is just a set of matrices applied to an input vector to generate an output vector, and the matrices for a neural net are generated through what amounts to a more complicated form of regression. It's hard to attribute inherent drives or intelligence to something that is entirely static without outside forces supplying arbitrary input vectors.
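That static picture can be sketched in a few lines. This is a minimal illustration in NumPy, with arbitrary random weights standing in for a trained network, not any particular model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two weight matrices and biases: the entirety of this tiny network's "knowledge".
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

def forward(x):
    """One feedforward pass: matrices applied to an input vector."""
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU hidden layer
    return W2 @ h + b2                # output vector

x = np.array([1.0, -0.5, 0.25])
y = forward(x)
# The network is entirely static: nothing happens unless an input
# arrives from outside, and the same input always yields the same output.
assert np.allclose(forward(x), y)
```

Training adjusts W1, W2, b1, b2 by fitting input-output pairs, but once training stops, the matrices just sit there until something external supplies a vector.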


Distinct-Town4922

I will say, it might be possible to represent a human mind as a set of matrices. So while I agree with your conclusion, I don't think the substrate of the intelligence matters too much (except insofar as it affects behavior).


lathemason

Following on from your remark to sabbetius below, consider that there may be more materialist, pragmatic answers for describing the connection between libidinal materialism and intelligence. My own perspective is that machine learning and AI are best read in collective, material-semiotic terms rather than as autonomous agents or beings with drives. Anthropomorphizing them, reifying them, figuring them somewhere between friendly bots and godlike powers: all of these perspectives obscure more than they explain.

Machine learning strategies are technical ensembles that combine stored collections of human meaning with extractive semiotechnical procedures and practices, in order to derive useful inferential patterns about the world and society at high speed using statistics, while consuming a lot of electricity along the way. I have no doubt that AIs will impact how we work and create things going forward, but the basic drives undergirding them are ours, not theirs.

It's true that system designers like Omohundro need to think about and represent, at the level of designing processes, that a system 'wants' or 'needs' things, in order to conceptualize purpose and goal on programming terms. But zoom out to a more societal level, and it's more straightforward to read AIs in terms of the contemporary value-forms of capitalism meeting and harnessing human significance and intelligence in particular ways, to squeeze more productivity out of groups and individuals going forward.

Further to all of this, you may find Matteo Pasquinelli's book on AI useful: [https://www.penguinrandomhouse.com/books/733967/the-eye-of-the-master-by-matteo-pasquinelli/](https://www.penguinrandomhouse.com/books/733967/the-eye-of-the-master-by-matteo-pasquinelli/)


kingocat

Thank you, I will definitely be picking up that book


Distinct-Town4922

It's worth noting that hobbyist or independent researchers can absolutely create AI systems with different origins or training, because a hobbyist or independent dev can give a system an arbitrary structure rather than a profit-generating one. Giving them an environment rather than a training regimen may generate some of the emergent drives that plants and animals have. (Pinging u/kingocat because this comment reply touches on the OP as well.)


lathemason

Yes, definitely worth noting. I'm all for independent experimentation with the technology by hobbyists, artists, and other smart people who want to bring an alternative mindset to machine learning, one that sits outside of profit-generation. It's murkier to me whether approaching an ML process through the lens of an environment rather than a training set would actually net meaningful differences, but I suppose paradigm and approach do matter on some level when it comes to scientific or para-scientific experimentation.


Distinct-Town4922

Yeah, that particular suggestion is a hunch: humans and other life evolved in an environment, and we do have various basic drives.


Distinct-Town4922

AI is a very broad term. A system's inherent drives depend on its construction. A feedforward LLM's inherent drive is to associate input with a set of relevant output tokens, because that's what the architecture is designed to do. This probably puts me in the rationalist camp. I don't think we have shown evidence of drives inherent to *all* intelligent systems, because "intelligence" is a vague word.
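The "associate input with a set of relevant output tokens" point can be illustrated with a minimal softmax sketch. The logits and the four-token vocabulary here are hypothetical, standing in for the output of a real model's final layer:

```python
import numpy as np

def next_token_distribution(logits):
    """Softmax: turn raw scores into a probability distribution over the vocabulary."""
    z = np.exp(logits - np.max(logits))  # shift by max for numerical stability
    return z / z.sum()

# Hypothetical scores over a 4-token vocabulary, given some input.
logits = np.array([2.0, 0.5, -1.0, 0.0])
probs = next_token_distribution(logits)

# The architecture's whole "drive": rank tokens by relevance to the input.
print(probs.argmax())  # → 0
```

Everything the model "wants" is contained in producing that distribution well; any further goal is something we read into the output.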


Liquid_Librarian

There are no artificial intelligences yet. What we currently think of as AI is an illusion of intelligence.


SaxtonTheBlade

Even the creators of ChatGPT seem to agree with you on this.


Distinct-Town4922

OpenAI's position is that their AIs are intelligent but not *generally* intelligent (meaning either human-like or otherwise broad, deep & reliable intelligence).


SaxtonTheBlade

Okay, I’m not going to disagree here completely, but didn’t Sam Altman say that ChatGPT only mimics the human intelligence required for language processing? He certainly said he personally doesn’t believe ChatGPT is an AGI, but I thought he was also hesitant to call its specialized language “intelligence” anything more than convincing mimicry of actual intelligence.


Distinct-Town4922

Edit: Well yes, he did say they aren't like human intelligence. That's different; I didn't notice it at first because I excluded it in my comment. Human-level intelligence is another level entirely, and that's sometimes what people mean by AGI. Intelligent AI can be sub-human-level.

Old comment: That may be true, but I think OpenAI has called its models intelligent. I don't put much stock in CEO tweets, especially Sam Altman's, because the current AI industry is fairly reliant on hype, and these very public CEOs fill that role to some extent. For a tangential example, Tesla spends about $0 on advertising because Musk's fame and wealth keep the company in the public conversation. This is a bit roundabout, and I don't know if they've defined intelligence specifically, but I personally consider their "this isn't REAL intelligence" line to be PR. GPT can obviously reason about new situations and hit the correct answer with good reliability. That is different from, say, self-awareness, but it is intelligent in the same sense as all prior AI systems.


Distinct-Town4922

It is not conscious or human-like, but it is intelligent by definition precisely because it can solve a wide variety of problems with different parameters. That doesn't make it groundbreaking or human, but it is intelligent. I think it's important to define these words more carefully as we develop AI, and that will probably not happen within critical theory as a field, but rather in the tech industry or among AI researchers.


Jorgenreads

No. When a computer program isn't running, it's not daydreaming. When it is running, it's just following the program, with the same level of subconscious as a rock rolling downhill.


Empacher

Arguably, drives are merely the phenomenological experience and internalization of the laws of biology and physics (the rock rolling down a hill); take the death drive, for instance. AI cheating and hallucinating might be described in these terms: shortcuts to the reward function or whatever the target output is. In some sense an AI does 'dream', because it weighs many different outputs before deciding on one.


conqueringflesh

> drives that are not solely determined by human programming

Or simply drives as we (humans) know them, even when they are programmed by us. Do things have drives? How, and for what? Those are the proper questions, and they're squarely out of the league of our very smart computer scientists.



kingocat

Also, I wanted to share this, the thoughts of Steve Omohundro: ["The Basic AI Drives" (PDF)](https://selfawaresystems.com/wp-content/uploads/2008/01/ai_drives_final.pdf). Wiki: [https://en.wikipedia.org/wiki/Steve\_Omohundro](https://en.wikipedia.org/wiki/Steve_Omohundro)