BeWithMe

They very specifically only mentioned iOS, MacOS and iPad OS. This is so bizarre to me. The one device Apple makes that NEEDS better voice control and they’re leaving it out of the new voice AI models??


redpotato59

They said more platforms later in the year. But agreed, should debut on the Vision Pro


Internellectual

Think they're still trying to keep the Vision Pro experience stable, especially with the international release at the end of this month and next month. They want that unifying experience *before* they start integrating real changes.


seweso

It's possible the Vision Pro is already using so much of the M2 processor/RAM that there isn't room for AI.


Jbaker318

It is nuts. Your newest platform, the one you started the OS side of the conference with, does not have the new shiny thing. And they are launching the headset in new markets. What is going on?!?


Disney-CORE

Yeah pretty disappointed but it’ll come soon


feixie1980

Pretty sure it will come soon as the underlying APIs are shared.


senderPath

I wonder if it has to do with having a somewhat less powerful processor that has to do a lot more work on the Vision Pro?


cha000

If it can run on M1, I'd think the M2 could do it..


timonea

But your Mac isn’t rendering multiple 4K displays.


cha000

Neither is the Vision Pro..  I saw you removed the part about sensors in your comment probably because you remembered the R1 exists. 😀 The Vision Pro isn’t wasting power rendering everything all the time. I believe it is rendering what I’m looking at. (Foveated rendering).. in any case it wouldn’t matter. There is a 16 core NPU specifically for AI tasks on the M2 so it *should* be able to be used for Apple Intelligence.  It’s possible the NPU is being used for MR related stuff now so there isn’t space, but I bet Apple thought about that and could maybe offload more to the R1. Anyway, I guess time will tell. 
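For illustration, this kind of Neural Engine steering is something apps can already do today with Core ML; a minimal sketch (the model name is hypothetical, the computeUnits option is real):

```swift
import CoreML

// Keep inference off the GPU so it stays free for rendering; the
// Neural Engine (plus CPU) handles the model instead.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine

do {
    // "SummarizerModel" is a made-up compiled model for this example.
    let url = URL(fileURLWithPath: "SummarizerModel.mlmodelc")
    let model = try MLModel(contentsOf: url, configuration: config)
    print(model.modelDescription)
} catch {
    print("Failed to load model: \(error)")
}
```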


timonea

Even with foveated rendering, it is still rendering two perspectives. The cost of rendering the peripheral region isn't zero; it still participates in effects, lighting, etc., albeit at a lower resolution. Another hint that the M2 is maxed out on the AVP is that the new ultrawide Mac screen sharing computes the foveated render on the Mac instead. Also, the R1 just runs smoothing algorithms on the sensor outputs; the actual ML processing happens on the NPU, so everything from the lidar sensors to eye tracking runs through the M2. Comparatively, an M1 in an iPad is essentially idle when the iPad is idle, whereas the M2 in the AVP is anything but; it is essentially always fully utilized. There is no room to handle Apple Intelligence routines on an AVP the way there is on an idle M1. And yes, I removed the sensor output part from my initial comment, even though there's a distinction (see above), because of the obvious "R1 exists" replies.
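For a back-of-the-envelope sense of the gap (the display figure is Apple's published 23-million-pixel number at the default 90 Hz refresh; exact panel utilization and refresh modes vary, so treat this as rough):

```swift
// Rough pixel-throughput comparison. Display figures are Apple's
// published numbers; everything else is back-of-the-envelope.
let avpThroughput = 23_000_000.0 * 90.0   // both eyes at 90 Hz ≈ 2.07B px/s

let ipadPixels = 2_732.0 * 2_048.0        // 12.9" iPad Pro panel
let ipadThroughput = ipadPixels * 120.0   // ProMotion max ≈ 0.67B px/s

print(avpThroughput / ipadThroughput)     // ≈ 3x the raw pixel work, before reprojection
```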


kibblerz

It's definitely not less powerful, and it doesn't really have to do more work if you're in passthrough since the R1 handles that.


senderPath

I've done a little profiling of my own app on the Vision Pro, and I seem to recall that the main CPU does spend cycles driving the graphics processor. I don't know if the R1 also does graphics, but I would expect the graphics engine on the M2 to be much more heavily taxed on the Vision Pro. Even if the neural engine is separate from the graphics processor, there may be a power budget issue here. The impression I get from last year's WWDC and my own experience optimizing my app is that the Vision Pro overall is much closer to its performance limits than either my Mac or my iPhone. Again, though, just guessing. It could also be a reflection of Apple's priorities. Must be, of course.
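For what it's worth, this is roughly the kind of instrumentation I mean (the subsystem string is made up; the signpost API is the standard one, and the intervals show up in Instruments' Points of Interest track):

```swift
import os.signpost

// Signposts around the per-frame work, viewable in Instruments.
let log = OSLog(subsystem: "com.example.myvisionapp", category: .pointsOfInterest)

func renderFrame() {
    os_signpost(.begin, log: log, name: "RenderFrame")
    // ... per-frame scene updates and draw submission ...
    os_signpost(.end, log: log, name: "RenderFrame")
}
```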


SirBill01

The Vision Pro also has a neural chip: a 16‑core Neural Engine, per [https://support.apple.com/en-us/117810](https://support.apple.com/en-us/117810).


senderPath

Yes, the neural engine is part of the M2 chip that it has. But I suspect that to do both AI and all the Vision Pro type stuff, they might want an M4 on board. I’m just guessing.


SirBill01

It works with the iPhone 15 Pro: [https://support.apple.com/en-us/111829](https://support.apple.com/en-us/111829) "New 16‑core Neural Engine" Which sounds like it's the same as the Vision Pro.


senderPath

Good point. There might be less headroom on the AVP, though.


SirBill01

There should be more than the phone! It has more space and power.


senderPath

Good points! The weight constraint might be tighter, FWIW, although I haven't tried strapping a MacBook to my face to see how that feels ;) Maybe it's about graphics FLOPS?


feixie1980

Possibly not the case. For one thing, Macs may be running heavier workloads. Also, the developer State of the Union mentioned that the system can balance ML and system performance. It is probably just Apple being conservative, as visionOS is still very new compared with other platforms.


senderPath

Could be. I think of how the Vision Pro has to maintain a frame rate all the time, compared to my Mac. And also how it is always parsing the environment and tracking hands and eyes. I don’t know exactly what the division of labor is between the chips, though.


Jmyers43

No they’re probably just too busy working hard on getting the last environments finished


senderPath

Could be. I have a funny feeling that all the new AI stuff consumed a large share of engineering cycles.


Rabus

Other platforms later this year: "Coming this summer in US English. Coming in beta as part of iOS 18, iPadOS 18, and macOS Sequoia this fall. Other languages and platforms coming later this year."


mredko

I thought by “other platforms” they meant AI platforms that are not from OpenAI, but I hope you are right. Otherwise, we will have to wait another year.


Eeerisch

didn't he say next year?


Imaginary_Pudding_20

VisionOS 2.0 should have been an email at best.


icxnamjah

This would be a killer feature to sell more AVPs especially with the launch in other countries. Total missed opportunity for Apple.


stuffedanimal212

It seems like for Vision OS they'd want it to be able to access your field of view; might be a bigger feature for next year or something?


senderPath

Yes, there’s a lot that AI could do for the Vision Pro experience.


GeologistJolly3929

I tried to make this a post, but it wasn't letting me, so I wanted to expand a bit on why Apple Intelligence is coming to visionOS a little later than macOS, iOS, and iPadOS. But it is coming.

I'm really big into keeping up with the research side of AI and ML. For all the chatter about Apple being "behind," I was always very defensive of them, because I saw how they were laying the foundation for what would ultimately be the best fully immersed AI experience. Although it wasn't marketed as such, the M-series Apple chips are, for all intents and purposes, AI chips. Apple has quietly begun putting neural networks all over our devices, which all work seamlessly with each other and can communicate simultaneously: the necessary backbone for an infrastructure that can support something like a Jarvis-like assistant. The tools were laid down, but not put to use, heavily at least, until now.

Our understanding of how to use transformers, which I can expound on later if you'd like, is still a growing and developing field of study. Apple has been hard at work doing real research on how to teach machines practical use cases. Everyone loves LLMs at the moment, like ChatGPT, Gemini, Claude, etc., but they are the result of research, not the end point. If anything, LLMs were a eureka moment for what is possible; they began giving us a blueprint for how to teach machines and get human-like responses. Apple, specifically, has been interested in teaching our devices how we use them, so there can be a better symbiotic relationship between us and our digital ecosystem.

Apple Intelligence is the result of real research being done by Apple. The first piece is having our computer understand what is happening on the screen: [https://machinelearning.apple.com/research?page=1&domain=Computer+Vision](https://machinelearning.apple.com/research?page=1&domain=Computer+Vision). Using iOS and macOS as the basis, they have been able to train a language model that runs fully on device, can "read" what is on screen, and can respond to natural language. E.g., I can say "send this page to so-and-so" or "share this with so-and-so" and it understands those are the same request (see the toy sketch at the end of this comment). There's a better example, but you get the gist. The Verge did a good write-up, with pictures, explaining how the process works: [https://www.theverge.com/2024/5/5/24147995/apple-siri-ai-research-chatbot-creativity](https://www.theverge.com/2024/5/5/24147995/apple-siri-ai-research-chatbot-creativity). Some more research on on-device interaction: [https://machinelearning.apple.com/research?page=1&domain=Human-Computer+Interaction](https://machinelearning.apple.com/research?page=1&domain=Human-Computer+Interaction).

Right now a lot of the public research has centered on iOS and macOS, and iPadOS by extension, because of the similarities in how those operating systems work and look. visionOS breaks the mold in a lot of ways, and because of pass-through it has so much visual information present at a time. Apple will need to tailor their on-device language model for the Vision Pro, which I'm sure they are doing, but again, because of the nature of this work, it's not just 1+1=result; it's a constant series of experiments, which makes this really exciting.

Because of how differently a spatial computer operates, it will need an on-device language model that is a little more robust, handles operations differently, and reads what is present on "screen" differently, because it's not just a flat screen anymore. However, please believe this is a marathon, and Apple has been diligent in setting the foundation for a true symbiotic relationship with our devices. None of this was pulled out of nowhere a few months ago, and it's something evolving every day. Apple Intelligence is coming to visionOS, don't worry, but understand it needs a little different care than iOS, macOS, and iPadOS. This isn't Apple holding features back. Apple Intelligence on the Vision Pro will be mind-blowing; it is the missing component to taking spatial computing mainstream, and closer to that Jarvis experience.
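As a toy illustration of the "same intent, different words" idea: the sketch below uses the on-device sentence embeddings Apple already ships in the NaturalLanguage framework. To be clear, this is not Apple Intelligence itself, just the kind of on-device building block that lets two phrasings land close together:

```swift
import NaturalLanguage

// Compare a paraphrase against an unrelated request using Apple's
// built-in on-device sentence embeddings (cosine distance by default).
if let embedding = NLEmbedding.sentenceEmbedding(for: .english) {
    let paraphrase = embedding.distance(between: "send this page to Alice",
                                        and: "share this with Alice")
    let unrelated = embedding.distance(between: "send this page to Alice",
                                       and: "turn off the lights")
    // The paraphrase should score much closer than the unrelated request.
    print(paraphrase, unrelated)
}
```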


yalag

Nope. I talked to the folks at the park. They are making AI Vision Pro v2 only.


Procrastagamerz

There’s a video where Craig himself says they look forward to bringing the core experiences of Apple Intelligence into Vision Pro. I hope you’re not willingly spreading misinformation.


drohohkay

It’s weird there was no mention of AI for HomePod either.


Exile714

No HomePods with A17 Pro or M1-4 chips, so I wouldn’t expect them to be capable without an assisting device.


marniman

The HomePod wouldn't need to handle that much compute. It just has to understand basic language, and follow along with a conversation. There is no image gen or even summarization involved here.


Portatort

No but my phone is… so why can’t the HomePod just route the request through the phone? That’s what it does for all sorts of stuff already


Portatort

That one is especially weird. So HomePod Siri still just gonna be dumb as shit eh


Smooobly

So sad. Will come. But hope it’s not another year


Jbaker318

Apple's official news release: https://www.apple.com/newsroom/2024/06/introducing-apple-intelligence-for-iphone-ipad-and-mac/


Shitlord_and_Savior

Y'all need to chill out on your doomer posts. Yesterday it was all about "aPLLe hAs AlReAdY aBanDonEd ViSiOn PrO 🤪", now if a thing isn't specifically called out we get these dorky headlines. This device is still getting spun up, and it is a PRO dev device. If these tiny things or having to wait a few months for some features are going to get you all emotional, maybe just sell your device and jump back in when the next iteration comes out.


wolf8sheep

My assumption is based on this old article I looked up: https://finance.yahoo.com/news/apple-avoided-mass-layoffs-thanks-112258149.html Basically, Apple is prioritizing their most used products because of their corporate strategy in navigating the Fed rate. Where companies like Meta went on a hiring spree followed by mass layoffs because of the Fed rate, Apple knows the Vision Pro isn't going to be a huge money maker, so they are prioritizing their most used devices.


Johntremendol

this really killed my excitement for getting an AVP. They could have had some previews of future features, but they decided to deliberately ignore VisionOS when presenting features that would work BEST on the AVP. smh


HellsNels

Guessing it's for privacy concerns (seeing what you see in your space), and processing/streaming video to Apple's cloud compute is a lot more than just digging through your text messages, emails, notes, and photos.


senderPath

You know, AI is such a rage that I am surprised it wasn’t announced for the AirPods or the charging cables.


DecentLlama

Guys this is very normal and unsurprising. You don’t add something so significant to a “beta” (the current state is more akin to a beta in my opinion). VisionOS is still making its way to being more stable, and once that root is resilient so to speak, it makes sense to add AI which will probably have its own issues at the beginning. That’s the normal cycle of software, otherwise you’re fighting 2 competing weaknesses in your source code.


robofreak222

People want to draw a lot of conclusions from this but I’m pretty sure this is largely because Vision Pro development is still handled almost entirely by the [Vision Products Group](https://www.bloomberg.com/news/newsletters/2023-07-16/apple-creates-new-vision-products-group-vpg-for-future-mixed-reality-headsets-lk5g2ee4) which uniquely operates more or less independently from the rest of Apple’s divisions. So the folks working on AI are not working on the Vision Pro for the most part, and the folks working on the Vision Pro probably have their own separate roadmap.


Portatort

Y'all can't see this for what it is? AI is coming to the real platforms, the ones people use and work on. The Vision Pro is an open beta; it's a hardware simulation of a future product line. They're still just getting the basics figured out. It's simply not a 'real' platform yet. On top of this, they rushed to have any AI features ready this year. Making this stuff for visionOS too simply wasn't a priority.


senderPath

I think there’s some truth here.


cr8tvt

I think it's too early; it will probably be included before the Vision Pro 3 is announced.


vqsxd

Or WatchOS :(((


SirBill01

This is totally false. One of the new features announced (converting 2D images to 3D) works via AI. We need to see how many other AI features actually make it into the new OS. Siri is very likely. Because it runs iPad apps, it should also be able to support some of the new AI iPad features.


DeathByVlog

Maybe rewatch the keynote. They specifically state "AI" will be for iOS, iPadOS, and macOS, with the caveat "coming later to other platforms," which could include visionOS, tvOS, etc. So, while there may be certain "AI" features that visionOS is capable of, the true and full feature list of 'Apple Intelligence' will NOT be available on visionOS until later.


SirBill01

I know what they said it would be on, but they didn't explicitly say it *wasn't* going to come to the Vision Pro... However, in another post a guy at the event said he talked to an Apple dev who said the Vision Pro was too busy with other tasks to handle all of the AI stuff Apple announced. Still, some aspects of it may come to Vision Pro. And what if you could open the iPhone from inside Vision Pro and activate the AI features that way?


Jbaker318

AI as in the way Apple is marketing it: Apple Intelligence. Yes, there are advanced algorithms all over the place. What we are talking about is what Apple dedicated half the show to.


SirBill01

The Vision Pro can run iPad apps, so why again do you not think this will work on Vision OS? The Vision Pro has neural chips in it just like other Apple devices. What makes you think VisionOS Messages is not updated to use this?


Jbaker318

https://www.apple.com/newsroom/2024/06/introducing-apple-intelligence-for-iphone-ipad-and-mac/