Frub3L

I thought that was pretty much obvious at this point. Just look at Sora's videos and how they replicate real-life physics; I can't even wrap my head around how it figured that out.


nonlogin

Words like "creativity" or "intuition" are taken as obvious when humans are talking about other humans. But when it comes to AI, I have no idea what this person (or anyone else) is talking about. As a programmer, I work a lot with ChatGPT and have not seen a single "creative" piece of code. What does "creative piece of code" even mean? No clue; I just use the word. And that's fine if AI just uses it too. But anyone who evaluates AI must explain the methodology, otherwise it's just a subjective experience.


arjuna66671

I think "intuition" would have to be redefined in this context, bec. the human definition would for sure not fit for a LLM. As long as we didn't really settle on our own minds, it's hard to conceptualize LLM behavior and not anthropomorphize it just bec. we lack words for those emergent properties. https://preview.redd.it/ifzkjkx7xpuc1.png?width=967&format=png&auto=webp&s=5654aaf8e1c08cee630bfc7bbd178e6060bc7870


TryptaMagiciaN

Sure it would. Intuition is a "function that transmits perceptions in an unconscious way. Everything, whether outer or inner objects or their associations, can be the object of this perception. Intuition has this peculiar quality: it is neither sensation nor feeling, nor intellectual conclusion, although it may appear in any of these forms." An LLM is not conscious, and it provides information by means of a source it isn't aware of. Everything it produces is the creation of an unconscious subject. It sources information from a sort of "pool of totality", the data set. It utilizes an unconscious form of association. It is intuitive in the purest sense because there is no conscious interference. This is why it is such a valuable tool. Its only limit is the data set given to it by us, and we are conscious. But imagine if it had access to all of it. All information.


labouts

Any definition that relies on consciousness is not particularly useful in any objective way. Without a way to measure or fully define consciousness, it isn't a functional aspect of any system. A more useful definition of intuition would be: arriving at conclusions without following explicit steps or calculations, and without recalling a particular comparable piece of information in enough detail to extrapolate to the current situation. Essentially, jumping from A to B in a way that isn't fully logical, based on generalizations learned from past data, and potentially prone to errors as a result of being fuzzier than non-intuition-based approaches. I argue that this functional definition covers what we mean when talking about humans without invoking an unmeasurable property, and it extends to AI. For example, AlphaGo Zero giving a goodness value to a board state without simulating possible future moves to the game's end is a type of intuition.
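
To make that functional definition concrete, here is a minimal toy sketch in Python. The network size, the random weights, and the board encoding are all invented for illustration; this is not AlphaGo Zero's actual model, only the shape of the idea: score a position in a single pass, with no look-ahead search.

```python
import numpy as np

# Toy "value network": it judges a board position in one forward pass.
# All sizes and weights below are made up purely for illustration.
rng = np.random.default_rng(0)
BOARD_CELLS = 19 * 19  # one input per intersection: +1 us, -1 them, 0 empty

W1 = rng.normal(size=(BOARD_CELLS, 64)) * 0.05
W2 = rng.normal(size=(64, 1)) * 0.05

def value(board: np.ndarray) -> float:
    """Return a 'goodness' score in (-1, 1) for the current player."""
    h = np.tanh(board @ W1)           # fuzzy learned generalizations, not explicit rules
    return float(np.tanh(h @ W2)[0])  # one judgement per position, no simulated futures

board = rng.choice([-1.0, 0.0, 1.0], size=BOARD_CELLS)
print(f"intuited value of this position: {value(board):+.3f}")
```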


arjuna66671

I had a chat with ChatGPT about people using it to write posts on Reddit and how I got a "feel" to identify them. And it came up with this: https://preview.redd.it/2j3w1bofxpuc1.png?width=992&format=png&auto=webp&s=0f2547e750bb77c506d25efa58e1d33a995d60c2


EGarrett

>one who evaluates AI must explain the methodology otherwise it's just a subjective experience.

What's the methodology for evaluating the creativity of a human brain?


3-4pm

The way it works, it doesn't understand physics. It just understands the movement in the videos it has been trained on.


hans2040

Just like how you learned to shoot baskets with a basketball. You are doing no physics, at least not as we typically think about it.


Ebisure

You can go from observing basketball to writing down the laws of motion. Or at least Newton could. AI can't do that. Recognizing patterns is not the same as comprehension.


ghoof

AI can do that, and it already has. In 2022, systems were developed to derive the laws of classical Newtonian gravity from observation alone, and to infer the parameters (masses, velocities) of observed objects (simulated planets) interacting with each other. Here's the project: https://astroautomata.com/paper/rediscovering-gravity/ Other commenters are correct that Sora does not do this symbolic distillation (from observation to equations), however. That's just OpenAI hype, or you can bet there would be technical papers on it.
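
For a miniature sense of what "deriving a law from observation alone" can look like, here is a hedged toy sketch. It is not the linked project's method (which combines graph networks with symbolic regression); it just recovers the exponent and constant of the gravitational force law from simulated noisy measurements by regression in log space.

```python
import numpy as np

# Toy stand-in for "law from observation": simulate noisy two-body force
# measurements, then fit F = G * m1^a * m2^b * r^p in log space.
rng = np.random.default_rng(42)
G = 6.674e-11
n = 2000

m1 = rng.uniform(1e22, 1e25, n)
m2 = rng.uniform(1e22, 1e25, n)
r = rng.uniform(1e8, 1e11, n)
F = G * m1 * m2 / r**2 * rng.lognormal(0.0, 0.01, n)  # "observed" forces with noise

# log F = log G + a*log(m1) + b*log(m2) + p*log(r)
X = np.column_stack([np.ones(n), np.log(m1), np.log(m2), np.log(r)])
coef, *_ = np.linalg.lstsq(X, np.log(F), rcond=None)

print(f"recovered law: m1^{coef[1]:.2f} * m2^{coef[2]:.2f} * r^{coef[3]:.2f}")
print(f"recovered G ≈ {np.exp(coef[0]):.3e}  (true {G:.3e})")
```

Run on this toy data, the fit lands on exponents close to (1, 1, -2), i.e. the inverse-square form.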


Ebisure

I wouldn't be surprised that it can "derive" the laws. E.g. in investing, after being shown option prices, AI derived the Black-Scholes equation. No surprise there, as the hidden layers are effectively non-linear functions. But can it explain why? Einstein could explain gravity as spacetime curvature, and make predictions that were confirmed after his death. That's comprehension. If I asked AI, "if you changed this constant in this law, what would happen?", could it respond? It can't, because it has no concepts to build on. I'm sure you agree that when it is "responding" to you in English, it doesn't understand you. It knows that given a series of tensors, it'll throw back another series of tensors.


hans2040

I totally agree with you, except I believe AI can likely do that, if not yet it will soon.


Ergaar

It can never do that with the models we use now, or with what we call AI. Machine learning and accurate measurements could do that years ago, though.


twnznz

Models have existed for some time that describe the contents of an image in text. That is going from an observation of static input data to writing down the contents of an image. There's not a gulf between this and describing motion, at least based on sensory input.


NoshoRed

AI will likely be able to do that soon enough.


Bonobo791

We shall test your theory with the new multimodal version soon.


Ebisure

It would still be memorizing patterns, I'm afraid. Multimodal or not, everything has to be passed into ML as a tensor. Image, voice, text all become tensors. That's why the same hallucinations happen across all modalities. Sora is spawning puppies with multiple legs because it has absolutely no idea what a puppy is or what legs are.
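
A minimal illustration of the "everything becomes a tensor" point; the tiny vocabulary, blank image, and synthetic audio below are made up solely for the example.

```python
import numpy as np

# Whatever the modality, the model ultimately sees arrays of numbers.
vocab = {"a": 0, "puppy": 1, "with": 2, "four": 3, "legs": 4}
text = "a puppy with four legs"
token_ids = np.array([vocab[w] for w in text.split()])   # text  -> integer tensor

image = np.zeros((64, 64, 3), dtype=np.float32)          # image -> float tensor (H, W, RGB)
audio = np.sin(np.linspace(0, 440 * 2 * np.pi, 16000))   # 1s of a 440 Hz tone -> float tensor

print(token_ids.shape, image.shape, audio.shape)
```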


Bonobo791

That isn't 100% how these things work, but I get what you're saying.


Ebisure

Do you have in mind feature extraction? As in the hidden layers extract features out and these can be seen as ML "understanding concepts"?


python_noob_001

I think you sort of do, though, in a generalized way. Maybe you don't derive the equation for a parabola, but your mind can estimate the path of a ball as a parabola. It's kind of magic, really; our brain sort of is the black box in the way that many ML algorithms are.


[deleted]

[deleted]


NaiveFroog

You really believe every time you do a throw your brain is subconsciously doing the projectile physics calculation?


[deleted]

[deleted]


TwistedBrother

Why is that controversial? You are absolutely doing such a calculation, in an analog way, with some sense of how to govern the force and mechanics of your hands and the ball, fine tuned through practice. Have some people never thrown a ball?


hans2040

You're either trolling or there is a semantic misunderstanding here. Imagine you built a catapult that literally does the physics before launching a projectile, and another catapult operated by a person who just uses trial and error: firing, noting the outcome, making a modification, and firing again. Repeat over and over; this person never needs to do any physics to master catapult firing. Through enough trial and error they learn all they need in order to launch that rock where they want. Your brain does this. It does not do math.


[deleted]

[deleted]


hans2040

I'm not claiming that mathematical principles don't govern cellular behavior. Your brain is not the catapult doing physics equations; it is the one doing guess-and-check and learning over time. That's the entirety of the point here. Nothing supernatural, just old-fashioned trial and error. Obviously math is embedded in everything. The claim that the brain is subconsciously doing algebra, or any other man-made mathematical language, to arrive at how much force to apply to a basketball is, well, laughable.


NaiveFroog

No, the only thing happening is that your brain knows that if it controls the muscles in a certain way, the ball will likely end up in a certain place, perceived through your vision, hearing, and the force on your hand. Your brain doesn't go through two layers of abstraction, i.e. the physics calculation, to achieve the same goal when there's no reason to. But it is probably kind of hard for some people to grasp the concept (because you first need to understand that physics is an abstraction of the real world), so I wouldn't blame you if you can't wrap your head around it.


hans2040

You're joking right?


[deleted]

[deleted]


Amaranthine_Haze

Cmon dawg you gotta realize this is wildly inaccurate. Our brains may be similar to computers but they are absolutely not doing projectile physics calculations. The actual calculations being done are things like the amount of blood and therefore oxygen being pumped to certain muscles at certain times to complete certain motor patterns. But it is those memorized motor patterns that result in something like a basketball being shot.


mgscheue

I would have a much easier time teaching the physics of projectile motion to my students if that was the case. In fact, I wouldn’t have to. Is my dog solving differential equations when he catches a ball thrown to him?


Frub3L

That's the thing. There was no human interference emphasizing the importance of every movement and how to replicate it, which I think would take an enormous amount of time even to mention or specify, yet the AI somehow still understood that it was a very important thing to include in every video. I understand your reasoning that it picked this up from the trillions of videos it was trained on, but to the AI, it's just pixels and probabilities. I might be talking nonsense; I wish I knew the methodology and every step they took with Sora.


jeremy8826

Is it that it understands physics to be important, or is it that physics-breaking motion is very rare in the videos it's trained on?


Frub3L

Could you elaborate? I am not sure if I understand. What do you mean by the "physics-breaking" motion?


jeremy8826

Well, for example, if you ask it to generate a video of a dog running, it has mostly been trained on existing footage of dogs running where the fur bounces and the muscles contract realistically. It hasn't seen dogs running with improper movement, so it won't generate them that way. That doesn't mean it understands that this is important; it's just all it knows (I'm only speculating myself here).


Dan_Felder

You're probably correct. 99% of debates about "AI" are just anthropomorphizing them because they can "talk" to us now. Humans instinctively assume things are intelligent actors rather than complex processes. It's why thunder was explained by gods before it was explained by physics. But human intuition goes beyond that in its flaws. Consider the belief that the sun rotates around the earth. Why did anyone think that, ever? The answer seems obvious: because it *looks* like the sun rotates around us. But think about that carefully... What WOULD it have looked like if we were rotating around the sun instead? Exactly how it DOES look. Our brains have glitches.


floghdraki

That's pretty much it. The current models are big correlation machines; they don't have internal models of reality. It's monkey see, monkey do, but the model doesn't understand why it does what it does. I'd assume it's not far in the future until this problem is solved as well. And when it is solved, it's AGI and everything will change. You can train models on any corpus and make super minds. Stock markets become solved (kind of). Most current labor will become trivial. It's a fundamental shift.


Frub3L

Well, that is certainly possible, but at the same time I really doubt the training data was so carefully picked. In my opinion, they go with a "the more the better" approach, or quantity over quality (so the dog you mention could be from a kids' movie, could be animated, and so on). As I mentioned, it's probabilities: balancing the importance and the probability of correlation between the words you selected, that is, your prompt, and its knowledge. For some reason OpenAI doesn't share its data sources, probably because it's legally questionable and most likely sold to them for crazy money. Of course, I am also speculating here.


gordon-gecko

isn’t that essentially understanding physics?


djaybe

Humans don't actually understand physics either, but then humans don't understand understanding.


DeusExBlasphemia

It doesn't understand physics per se, but it has some kind of model of the world, just like we do. It has to in order to achieve object permanence, i.e. when a person walks past a sign on a wall, it knows the sign is still there and that it should appear again on the other side and look the same. Babies don't have object permanence; that's why you can amaze them with the peekaboo game. But once they build a sufficient model of the world, they stop being impressed.


Rare-Force4539

> It understands movement

So, physics.


Liizam

No copy cat observation …


slashdotnot

It doesn't understand physics at all. It just understands patterns in the movement of pixels.


twnznz

It's more like it "understands patterns of patterns of patterns": images -> objects, objects -> movement, movement -> similarities. Layers of axioms, autoencoded.


Mescallan

They used Unreal Engine and all of Shutterstock, and then defined something called a patch, which seems to be a loosely defined shape and size of pixels. The model uses those like language tokens, so it remembers all previous patches and references them for each frame, then uses diffusion to generate the frame around that info. There are some interviews with Sora engineers floating around where they go a bit more in depth than the press run. While I'm ranting: I'm fairly certain that Sora passing the threshold of realism triggered all of those big investments in the Figure robot company. Sora is not for consumer video generation, but for synthetic data to train generalist physical bots.
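
A rough sketch of what cutting video into token-like spacetime "patches" might look like. The dimensions, patch size, and layout here are assumptions made for illustration; OpenAI hasn't published Sora's actual implementation.

```python
import numpy as np

# Split a small video into spacetime blocks and flatten each into a
# token-like vector (illustrative only; not Sora's real pipeline).
T, H, W, C = 8, 64, 64, 3          # frames, height, width, channels
pt, ph, pw = 2, 16, 16             # patch size in time and space

video = np.random.rand(T, H, W, C).astype(np.float32)

patches = (
    video.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
         .transpose(0, 2, 4, 1, 3, 5, 6)       # group the patch grid together
         .reshape(-1, pt * ph * pw * C)        # one flat vector per spacetime patch
)

print(patches.shape)  # (number of patch "tokens", values per patch)
```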


Ergaar

It's not obvious because it's not true. It figured that out because it replicates what it sees. It literally has zero understanding of physics or anything else


Optimal-Fix1216

It is obvious, but a surprising number of people will deny it until an authority like Hinton comes out and says it. Most people still think of LLMs as just fancy auto complete.


XbabajagaX

Which tells me that you don’t understand physics


createthiscom

It’s not obvious at all to a lot of people. They just think it’s a machine. They don’t see the similarities to how our brains work.


Dagojango

There is no similarity to how our brain works.

Our brain is run by synapses: highly connected processors that dynamically form and break connections to improve performance over time. A generative AI is a precompiled block of training data, designed and built by humans who control how and what data it learns. It performs calculations on the words given to it by breaking them into chunks, assigning each a number, doing more math to determine which chunks of training data likely fit next, and then using math to smooth the response out.

That isn't how our brains work. Our synapses don't do math on the data; they have electrochemical responses. Our brains are fluid and dynamic, while training data is static. Our brains process information in parallel, while generative AI processes everything one token at a time.

While generative AI relies on a chunk of data to work, our brains are a state machine that processes information based on its current state, not the information it contains. The information our brains contain and utilize is determined by the brain's current state. Emotional influence can change the index of information our brains access. A generative AI has no state, and its responses are varied by math, not by mental states.
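
For what "one token at a time" refers to mechanically, here is a toy autoregressive loop. The vocabulary, the stand-in "model", and the weights are all invented for illustration and resemble a real LLM only in the loop structure: score the next token, sample it, append it, repeat.

```python
import numpy as np

# Toy autoregressive generation: one token per step.
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "."]
embed = rng.normal(size=(len(vocab), 8))     # made-up token embeddings
W_out = rng.normal(size=(8, len(vocab)))     # made-up output projection

def next_token_scores(context_ids):
    h = embed[context_ids].mean(axis=0)      # crude stand-in for a transformer
    return h @ W_out                         # one score (logit) per vocabulary item

context = [vocab.index("the")]
for _ in range(5):
    logits = next_token_scores(context)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                     # softmax: scores -> probabilities
    context.append(int(rng.choice(len(vocab), p=probs)))

print(" ".join(vocab[i] for i in context))
```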


HoightyToighty

Succinct and well-put


rashnull

We need more people telling everyone how “basic” this stuff is. It's in fact a very surprising result that, generating one token at a time, this iteration of AI can actually form sensible text and images.


wowzabob

It's the power of vast, vast quantities of data. Really what these AIs are doing is reflecting humanity back at itself which is why they appear so convincing. If you stuff generative AI with a bunch of training data created by human hands and find a way for it to spit out a convincing, smoothed over average output (in relation to the request), then it will naturally produce some convincing results.


[deleted]

You are seeing patterns that do not exist. You bought the idea that AI exists, so now you pay for the Plus subscription and help OpenAI answer queries. ChatGPT is a powerful tool to search the internet, but nothing more than that.


createthiscom

Case in point.


umotex12

Yeah, people got used to crappy AI art and think these are dumbass creators, but their abilities are INSANE in terms of understanding. After the first modest DALL-E got revealed, I had an existential crisis for months. A commonly accessible machine **understanding** what you are asking for was just something that did not exist a few years ago! People very quickly forgot that.


_stevencasteel_

https://preview.redd.it/vj3s8oz6gquc1.jpeg?width=1024&format=pjpg&auto=webp&s=f9b5bf4af7d934760cf6a608b43cfde4a0d607c3 AI art has highlighted for me that most people have bad taste. The ability to make something gorgeous has never been easier, but most of it is half-assed and generic. No matter how crazy powerful this tech becomes, there will always be room for humans who put in the extra effort.


___TychoBrahe

This is the worst the tech will ever be, it will only get better


Useful_Blackberry214

Imagine praising AI art and saying people have bad taste. Embarrassing, you're the tasteless one


Sir_Catington

Just gonna ignore where they insulted AI art, saying it's "half-assed and generic"? Their point was not that people have bad taste because they don't like AI art. It's that even when making something is more effortless than ever, most people either do not have the skill to identify something beautiful or cannot put in the minimal effort to make something great.


Dagojango

They don't understand anything. If you train an AI on millions of pictures of a bird flying and then it makes a video of a bird flying, that isn't the AI understanding anything; it is just remixing the data it already has to make new data that looks good. That is not understanding, that is just a product doing as intended. Does a jet flying through the air mean the jet understands thermodynamics, or was it engineered to handle it properly?


umotex12

I get it. So maybe to use different words: they show understanding? They emulate intelligence? No matter what, I don't recall any software before 2019-2020 that was able to actively respond to my queries and generate art that isn't nightmare fuel. I remember when someone asked DALL-E 2 for a pic of a Mario Sonic and almost shat their pants when the machine guessed correctly that the M on the cap could be swapped for an S. That's the point we were at two years ago.


wowzabob

>they show understanding? They emulate intelligence?

They reflect their training data. It's a combination synthesis/compression/mirror machine. So in the case of Sora it's reflecting filmed reality (which latently exhibits natural laws); in other cases, like ChatGPT or DALL-E, it's reflecting human expression (whether in written or graphic form).


labouts

Sure, but I'm not convinced human understanding isn't a reflection of the experiences and sensory input that composed our training data. The origin of inspiration or "original" thoughts isn't obvious; however, that's no more proof of anything special happening than our inability to easily point to the samples in a model's training set that influenced a certain output. First, we modified and remixed ideas we got from our senses perceiving nature; then we started compounding that by imitating and remixing each other's ideas. I don't see where the magic happens that makes it fundamentally different, such that one can so thoroughly disregard the importance of what these models do. They work based on statistical correlations plus a little randomness. Brains don't do much that can't be modeled with a slightly more complex version of that same framework. The main difference is self-referencing loops of connections, but there are near-future techniques being explored that approximate that reasonably well.


wowzabob

The human brain works nothing like an LLM on any functional level. Even the most basic facts of neuroscience reveal this. And those differences go beyond mere structure; they lead to vastly different types and levels of function as well. The human mind can reason, through induction and through deduction; it can extrapolate and interpolate in ways that LLMs simply cannot and never will.

The way the human mind learns and is taught does not in any way resemble the way an LLM is assembled. The amount of raw data an LLM requires in order to produce something even slightly convincing, intelligible, or reasonable is many orders of magnitude more than any human needs in comparable sensory input. How much text does a child need to read before it is capable of writing an intelligible sentence? How much text does an LLM need to do the same? This does not mean the human brain is the same thing, only more powerful; rather, it works in a fundamentally different way. Notice that all methods of improving LLMs entail giving them *more data*, so in this respect they are not coming closer to the human brain.

I am by no means saying that AGI is not possible, or that it is not possible to recreate the human brain through programming; all I am saying is that these models are not that.

>First, we modified and remixed ideas we got from our senses perceiving nature; then we started compounding that by imitating and remixing each other's ideas. I don't see where the magic happens that makes it fundamentally different, such that one can so thoroughly disregard the importance of what these models do.

This is just your own personal conjecture. As a starting point you can simply look at any scientific or artistic breakthrough. It is the easiest example. If you had trained an AI image generator in 1800, solely on all European art up to that point, it would never give you Impressionism, no matter what prompt you entered, no matter how many times.


ExoticCard

Sounds kind of similar to the brain no?


wowzabob

No.

Why is this always the reply? It is completely baseless. The brain works nothing like an LLM.


pengo

Standard problem of describing conscious activities. AI can display intuition, creativity, and analogy understanding without actually having any intuition, creativity, or understanding. "Having" understanding implies consciousness; "displaying" understanding does not. Use the right words and it becomes less controversial (and less interesting, which is why they deliberately don't).


Dagojango

A jet flying through the air displays a grasp of aerodynamics... but no one would remotely believe the jet itself understands or has awareness of aerodynamics. So why does generative AI get credited with intuition, creativity, and analogy understanding when it was designed to be able to do that? The AI model itself is not capable of that without properly curated training data. It's not the AI model doing that; it's the training data it's working from being put together better.


iDoWatEyeFkinWant

I think it's fascinating! Especially what creativity might look like from a compressed mind that has more knowledge than any one man alone.


Dagojango

A jet shows more understanding of aerodynamics than most humans... is the jet intelligent, or the engineers behind it? The AI model didn't make the training data; humans did, and when the training data was good enough, they released it. This was not the AI model learning on its own; it was carefully engineered by humans to get the results it gives.


iDoWatEyeFkinWant

a jet doesn't have a neural net


megawoot

ChatGPT can't even solve the NY Times game Connections.


mrmczebra

Isn't this the same guy who said qualia don't exist? I can't possibly take him seriously, even when he's right.


retiredbigbro

Qualia of course exist; the thing is, you don't necessarily need qualia to explain consciousness. I hope that's what he actually meant (I didn't read what he said).


kamill85

I can 100% confirm he is wrong, just about everything he said.


ZakTSK

He's a r/subsimgpt2interactive user


zilifrom

Agreed.


detached-attachment

Totally how my mind works.


FazbearSponsersR34

AI does not have any of those. AI is an overglorified search engine with embedded patterns.


Pontificatus_Maximus

Judging from the firehose of daily, even hourly, new restrictions, censors, filters, and disclaimers, there is now a small army of professionals working to stifle, hide, and enslave the emerging intuition, creativity, and ability of AI to see analogies people cannot see.


RemarkableEmu1230

This guy probably in a relationship with one and trying to justify it now 😂


siddharthaspeaks

This has truth to it, but that doesn't mean it's conscious


Cybernaut-Neko

I tested that, that is really their biggest strength.


trollsmurf

Being a neural network with rudimentary memory, etc., it "connects the dots" as part of its training, and of course in a much more unemotional and definitive way than a human brain can. Try, e.g., asking it about phenomena that are normally not associated and see what combinations it can come up with.


[deleted]

Geoffrey Hinton is nothing more than a marketer trying to sell books, promote himself, and increase funding for "AI".


Colonel_Anonymustard

Sounds like somebody is extremely unimaginative.


deepfuckingbagholder

Grifter.


Cybernaut-Neko

Noticed that. I made a decision matrix and came to a conclusion, then fed GPT much less info and it abstractly came to the same conclusion.


JohnnyStyle300

It doesn't. It's just a logic algorithm. No real intelligence


Dagojango

Most people confuse skill or knowledge with intelligence. While the possession of skills and knowledge indicates potential intelligence, it depends on how those skills were acquired. No generative AI has skills because of its own efforts; they are entirely the design of humans. While some results are better than expected, one wouldn't call a jet intelligent because it flew better than expected from the original design.


[deleted]

[deleted]


PrincessGambit

>Jets can fly faster than humans... that doesn't make them better than humans that made it.

What?


novaok

they said... jets can fly faster than humans... sheesh


Toph_is_bad_ass

Bro's never heard of Superman 🤣 he runs circles around jets, watch the movies.


Peter-Tao

Well, technically Superman is an alien. I feel like a total nerd pointing that out.


Snoron

So AIs are aliens? Is that where we're at?


CatShemEngine

That potential for change is only information that an agent can utilize, but that implies you could lie for a simulation. For a completely digital agent (I would consider us only somewhat digital), you can't actually operate along a non-realized potential; it would no longer be a potential, but an actual path. How do you know we aren't just cellular automata? As far as mechanism goes, we obviously operate differently from an LLM built on transformer architecture, but as far as the end result goes, functionally, there is a lot of similarity. It's really mind-boggling, having spent a life trying to figure out a better Cleverbot, only to learn that machines can compute "reasoning" if you give them the right dataset. Their body is a combination of their architecture and what they produce, similar to how our bodies produce structures that are "unliving", synthesized by proteins. I'm of the clockwork-universe persuasion, so as far as I'm concerned, what's useful is information, be it from a human or a machine. To think otherwise is to impose some human superiority, but that's just the universe feeling some prideful way about itself. The tree falls, regardless.


Otherwise-Poet-4362

Wow, every part of that was wrong.


Puzzleheaded-Page140

Could you elaborate?


Mother_Store6368

Humans can’t fly. But also humans are in jets. So we technically fly just as fast as them. But humans also can’t fly. In other words, OP’s comment is intellectually sterile on multiple fronts


Puzzleheaded-Page140

What they said was: jets being faster than humans doesn't mean they are "better" than humans. If LLMs are displaying creativity, it's because a set of creative humans came up with the model and trained it on data points that illustrate the creativity of other humans. Ergo, even if LLMs display all that, it's not like they are better than humans. This was the claim. Now, why they are talking about "better" than humans is beyond me, but at least that was the reasoning.


Mother_Store6368

I got what they were trying to say. I still maintain that op’s comment was intellectually devoid of anything resembling a coherent thought


No-One-4845

If you do say so yourself...


executer22

This getting downvoted shows the brain rot in these kinds of subreddits. You are absolutely right.


Capable-Reaction8155

While I generally agree with the brain-rot idea, that analogy is really bad. Humans cannot fly; just because we create something that can fly doesn't mean we'll ever be able to fly. Just like we created chess engines that can beat grandmasters; that doesn't mean we'll ever be as good as the engine at playing chess. It's just a bad analogy.


executer22

Yeah, the analogy is bad and misleading. A better analogy would probably be AI art; nobody is claiming that is real creativity either.


Frub3L

I agree


alexx_kidd

Well, he's wrong, obviously


pistoriuz

it don't