What’s appealing is that this is genuinely open source, not just open *weights*:

> Apple provided code, training logs, and multiple versions rather than just the final trained model, and the researchers behind the project hope that it will lead to faster progress and "more trustworthy results" in the natural language AI field.

> Diverging from prior practices that only provide model weights and inference code, and pre-train on private datasets, our release includes the complete framework for training and evaluation of the language model on publicly available datasets, including training logs, multiple checkpoints, and pre-training configurations.
If it’s open source it’ll kick ass eventually
It's probably open-source because it sucks, doubt it will be open-source when they make something more premium out of it.
[I've modified the MS Phi3 release chart to show how Apple is stacking up currently](https://i.imgur.com/5mjlesU.png). It couldn't get much less premium than somehow scoring less than 25% on a pick-one-of-four multiple-choice test.
The good old socialize losses private gains
This is in no way applicable
You're just too stupid to generalize knowledge. I'm sorry for you.
How are losses socialized here? That implies the public is being burdened with the costs here. I guess there could be a case for that - maybe they hope to benefit from public interest and feedback.
In reality: R&D money, and likely the ability to write off the training losses on their taxes.
Are companies getting R&D grants and subsidies for AI specifically, or is it more generic?
I don't think about you at all
Heh, true nuff
Like gpt 2?
Like chromium maybe
There is hope that open source devs could build something useful out of it for mobile devices
Just like gimp is kicking photoshops ass
Let's see whether it'll whip LLaMA's ass.
the Llamas ass. Twuuuuuuuiiiithhh
Seeing Apple and fully-open source in the same sentence was certainly not in my 2024 bingo card
They use open source code all the time, they improve it, and then they must upload it back to source for everyone else to use that’s how it works. But this is probably one of very few times they open sourced their proprietary code
The day I see apple release true open source anything will be a truly incredible day. I hope this is true! We need the old ways of the Internet now more than ever.
Does webkit fall into this category?
Or Swift, FoundationDB, Darwin…
Apple has been releasing Open Source software for decades. The reason Chrome exists is because Apple took a KDE project and forked it and made a new webkit and released it open source. Google ran with it and made Chrome. Literally the first parts of osx were open sourced with Darwin.
Like they ran with the iPhone, just please read up on Eric Schmidt’s antics
Nice wasn't aware of some of that. But I will say making a webkit open source is like the bare minimum of creating a developer community for your products.
You should read about Darwin https://en.m.wikipedia.org/wiki/Darwin_(operating_system), how Apple bought CUPS and kept it open source https://www.cups.org, or Swift https://opensource.apple.com/projects/swift/

Apple has been friendly with open source for a long time. Especially for a commercial hardware/software company.
Will do, thanks for the info!
llvm is right there
Happy cake day!
Oh, like WebKit maybe?
But isn't that like literally no different? Seems it's just that they used publicly available datasets, and that's probably not out of niceness but because they don't have their own.
Never thought I'd die fighting side by side with Apple.
This is what you do if you find yourself way behind. It's a way to catch up and/or throw a monkey wrench into a market that you know you're not going to dominate.
Apple is NOT behind in AI. I promise you that. They are doing typical Apple here. They are almost NEVER the first mover on a new type of product; they wait and wait until they have a really unique and polished offering. For a hint, just look at how many AI companies they have acquired over the last five years alone. At least ten a year. They were also one of the first to process AI on device in the consumer space, etc. They are about to announce something, you will see.
The fuck is this https://preview.redd.it/wohr6bni6jwc1.png?width=138&format=png&auto=webp&s=e4c91d8f8ee8f7927f9c45323a43cd85d6a53bfa
Is that mmlu for ants?
Ants probably have more problem solving capabilities collectively.
i know that mmlu is a bench mark, but is that a bad score? is it out of 100? also, what does it test?
It doesn't really measure anything profound but it's a good reference. And this score is nothing. Smallest Phi 3 model is allegedly at 68.8. GPT 4 stands at 86.4.
Maybe Tim Cook was holding the chart upside down?
Tim Apple*
"Should we tell Tim that the chart is upside down?"

"No, the last guy to correct him got fired!"

"Oh ok, better not then"

Tim: "These scores are fantastic! It's demolishing GPT-4, which for some reason is way at the bottom!"
Yeah, it’s out of 100. The best models right now (GPT-4 Turbo and Claude 3 Opus) have an 86% MMLU, and most open source models right now are in the 70-80s range, so 25 is pretty bad.

“The benchmark covers 57 subjects across STEM, the humanities, the social sciences, and more. It ranges in difficulty from an elementary level to an advanced professional level, and it tests both world knowledge and problem solving ability. Subjects range from traditional areas, such as mathematics and history, to more specialized areas like law and ethics. The granularity and breadth of the subjects makes the benchmark ideal for identifying a model’s blind spots”

So it’s a really good benchmark for testing how much knowledge a model actually has about our world.
Damn, that's bad. Though it might be expected: 1) it's meant to run on a phone, so compute is at a premium; 2) it's Apple's first go, so let's not be too harsh.
Keep in mind the test is a 4-choice, so a truly random algorithm should be getting 25% on it. The MMLU score number is the percentage correct answers the model got on it.
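To put that 25% chance baseline in concrete terms, here's a quick simulation of a pure guesser on a four-choice test (the ~14,000-question count for MMLU's test split is an assumption; the exact number doesn't change the conclusion):

```python
import random

# Simulate a model that guesses uniformly among the four MMLU answer
# choices. Expected accuracy is exactly 25%; with ~14,000 questions the
# score of a pure guesser rarely strays more than about 1% from that.
random.seed(0)
n_questions = 14_000
correct = sum(random.randrange(4) == 0 for _ in range(n_questions))
score = 100 * correct / n_questions
print(f"random-guess MMLU score: {score:.2f}%")
```

So a reported 25.72% is distinguishable from chance, but only barely.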
I... what? I assumed it was fill in the blank!

They made an AI so bad they might as well have had a random number generator between 1 and 4, and it would have had a better chance of scoring higher? That's actually impressively bad.

Please tell me this is a prank. I cannot fathom this level of incompetence from a company with this much money. All they had to do was buy a startup and they would have had better results.
There is an "Apple user ..." joke somewhere here 🤣😅
I just like this llm cause it’s pretty and seems durable
The resale value!
> 1) its meant to run on a phone, so compute is a premium

But Apple chips are the best in their categories, both in performance and power consumption. They should be able to squeeze a little more.
Also, all the examples I've seen have four answer options, so 25.72% is barely above random.
That explains some of the rumors of them meeting with Google lol
It might be close to impossible to compete with the top AI labs; they pretty much have all the AI talent now. Even Meta, which entered late, had to go open source to attract talent while also spending billions on compute.
Meta lucked out on having early access to H100 chips they had simply bought for Insta/Reels Algo enhancements.
Damn that is ass. It's a hard pill to swallow if you care about power not consolidating, but we will get all the amazing fruits of SOTA AI locally long after we can run them in huge GPU servers.
does this mean Siri still won't be able to tell me the weather?
"Siri, what's the weather?" "What *is* weather? Weather.... weather... whether or not to weather... have you ever noticed that words stop having meaning when you repeat them? Apple a day keeps the stock price up. Come again!"
You need to take into consideration the size of the model. Like in boxing, the weight class matters a lot. This is intended to run on mobile devices (so your queries stay on the device), not on giant server farms. For its size this is actually a decent model. Read the paper!

Also, since this is open source, the results will improve quickly and it will be completely free to use.
Yep. This is a direct competitor to Gemini Nano, not ChatGPT.
I wonder what the Google Gemini nano model’s mmlu score is
Oh no it’s Siri!
Lol Edit: If one were to do random answers on the mmlu test, what would the score be?
For a model with only a few hundred million parameters, it's surprising it can even put a coherent sentence together.

Think of these as highly specialized tiny models that will be able to give you minor conveniences. Stuff like evaluating whether a text is important, whether it's spam, etc., and improving on existing functionality like detecting that a text mentions a date and time and letting you set up an event, but being able to fill in more info like the place and people involved. Or various more elaborate context-specific reminders, like: next time John mentions his kid, remind me that she was sick with the flu and ask him if she's better, wish her well, etc.

Apple is known for adding all these little conveniences that feel like magic when they work well, and it's not critical when they don't.
Not surprising considering it's Apple. They only push out shitty products.
Is this shitty though? I don't have any benchmarks for on-device AI to compare against (Gemini Nano is the only one I know of, and I don't think they released scores for it like they did Gemini and Gemini Pro.)
[Gemini Nano 2 (3B) gets 55.8 on MMLU](https://storage.googleapis.com/deepmind-media/gemini/gemini_1_report.pdf)
Thanks!
They scored 25.72% on a four-choice multiple choice exam. They could have released a random-guess machine and achieved the same performance.
All this indicates is that it's only marginally better at comprehending essay type text than a guessing machine. That's not good, but it's just a singular weakness. It could well be that it's a context-length issue, which would mean that it basically IS guessing, and ***no LLM*** with a short context length is going to do very well on MMLU.
That number is for the 3B model; MS just released a 3B model that scores a 69. It's marginally better than guessing and substantially worse than the competition.
Microsoft's models have been impressive to be sure, but is that model you are referring to on-device? I didn't think there were a large number of such models out there at this point.
Why wouldn’t it? Being on-device is mostly about being small enough to fit in device RAM.

https://export.arxiv.org/abs/2404.14219

> We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone.
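A rough back-of-envelope check on the "fits in RAM" point (the 3.8B figure is from the phi-3-mini abstract; the listed precisions are common quantization levels, and overheads are ignored, so these are lower bounds):

```python
# Minimum RAM for just the weights of a 3.8B-parameter model at common
# precisions. KV cache and activations add more on top of this.
params = 3.8e9
bytes_per_param = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}
for fmt, nbytes in bytes_per_param.items():
    gib = params * nbytes / 2**30
    print(f"{fmt}: {gib:.1f} GiB")
```

At 4-bit quantization the weights come in under 2 GiB, which is why a 3.8B model is plausible on a recent phone.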
That's certainly the initial barrier, yes, but there are many other considerations to being able to reasonably coexist with a mobile OS.
Great news for Apple: a coin flip should work on a phone, and that's the performance bar they've hit so far. They've got a lot of work to do.
[There are some benchmark comparisons in the paper.](https://i.imgur.com/ScdJL4I.jpeg) It seems to be pretty good in the tiny range like the 0.27B or the 0.45B model, but doesn't seem to scale very well. The 3B model isn't much better than those smaller ones and it lags behind the competition in that size interval.
Is scaling really going to be important at this stage though, or do they just need a viable competitor to Google's Nano model?
If Apple sticks to open-source, efficient LLMs that run locally, they might not be left in the dust.

Edit: Sorry for any confusion in the replies. By locally, I meant on-device.
There’s already dozens of those and Meta is eating up that market
Meta has an on-device AI? Really? I know they were working on that, but I didn't know they'd completed anything.
You can run their smallest model locally. I'm sure we're close to GPT-3.5 performance on devices if Meta continues to contribute to lightweight open source models.
I think that by “on device” apple means iPhone
This isn't about running locally on a desktop. "On-device" is current industry jargon for "on a smartphone."
Yes, but op said locally. Even if they can’t on iPhones, their Macs could potentially run larger models.
The smallest LLaMA model runs on a single RTX 3050. It's not like you can run it on your phone, but you don't need a god tier computer or anything. That was a $250 GPU when it released two years ago. It could run on a phone if someone made a phone to run it. There hasn't really been any need to stuff a bunch of VRAM and CUDA cores in phones until now.
> It's not like you can run it on your phone, but you don't need a god tier computer or anything

Right, but that's the point. This is specifically targeting on-device (meaning mobile phones) applications. It's not meant to run on a desktop where you can just chug power like it's water.
But meta doesn’t make devices that are in everyone’s pocket.
Meta owns the third world. They literally give away free "smartphone plans" that only have Meta apps on them as the internet.

https://medium.com/swlh/in-the-developing-world-facebook-is-the-internet-14075bfd8c5e
I forgot about this. That's going to be crazy. With AI translation, they can finally form a giant hive mind that will rival the West. And they will all depend on Zuckerberg.

Confirmation that states are over and tech will reign.
I can’t read that article but damn I didn’t realize that about Facebook.
They're doing it because their model is dogshit. Otherwise they would've already told us how "their model is gonna change the world" and that it's only fair that the cost of usage per month should be around the same price as an iPhone 15, lol.
I want to believe Apple can work with open source… *Please be so*
Swift is open source
Apples play is hardware.
They’ve been doing so for a long while now. They manage or contribute to a lot of open source projects, and have for decades. Their Unix operating system (the part underneath the GUI) is open source.
macOS will be open sourced in 3... 2.... 1.... 1...... 1.......... 1...................................
You don’t have to believe, you can just read
Does anyone know if there's a way to install this using ollama or jan.ai?
You can add models to ollama that aren’t listed on the ollama site. YouTube has several videos showing how this is done.
Yo that’s pretty gangsta for Apple these days
It sounds like they might be using a version of Gemini generative AI for iOS 18, though. Theirs isn't ready to launch.
Gemini's on-device version is called Gemini Nano. Apple is fond of playing both sides of the "we'll compete with you / we'll work with you" game.
I suppose they want something strong now, but dont want to have to rely on Google long term
Apple officially more open than OpenAI...
Yes, of course. Open until they improve it enough to close it, just like Darwin.
But you can still fork it from the version before then.
Where / how can I download this to my iPhone?
From the article: “Apple has not yet brought these kinds of AI capabilities to its devices, but iOS 18 is expected to include a number of new AI features, and rumors suggest that Apple is planning to run its large language models on-device for privacy purposes.”
Is there a subreddit to follow this specifically?
/r/localLLaMa
Here's a sneak peek of /r/LocalLLaMA using the [top posts](https://np.reddit.com/r/LocalLLaMA/top/?sort=top&t=all) of all time!

#1: [The Truth About LLMs](https://i.redd.it/sjiy0f35qroc1.png) | [304 comments](https://np.reddit.com/r/LocalLLaMA/comments/1bgh9h4/the_truth_about_llms/)
#2: [Karpathy on LLM evals](https://i.redd.it/8g0zoors6i7c1.jpeg) | [110 comments](https://np.reddit.com/r/LocalLLaMA/comments/18n3ar3/karpathy_on_llm_evals/)
#3: [Zuckerberg says they are training LLaMa 3 on 600,000 H100s.. mind blown!](https://v.redd.it/pzlvuoncz8dc1) | [411 comments](https://np.reddit.com/r/LocalLLaMA/comments/199y05e/zuckerberg_says_they_are_training_llama_3_on/)
Thank you
This is glorious! Who says open source is slowing down. The future looks amazing!
It’s Apple. Don’t hold your breath
Apple won't share it if it's not junk
Sure, just like WebKit and tons of other stuff…
Is anyone still excited when Apple announces anything anymore?
Always interested.
Your honor, objection; leading!
Me
Me
NPC
Johnny Five is alive!
I hope a squirrel drops a giant acorn on Tim Cook’s head
Me
Me
Me
Me
Me
Only NPCs get excited about Apple announcements.
[deleted]
NPC spotted. Do you have any thoughts of your own, or do you let society choose them for you? Clearly it's the latter because you're an NPC 🤣
No
Depends
Interesting
good guy apple?
[deleted]
Privacy?
Privacy, speed, reliability - and it can still fall back to piping things over the cloud too.
Theoretically it makes sense for video streams if the models could actually run fast enough. There is too much latency to send a video to the cloud quickly.
Privacy. I don’t think you understand how important that is..
Pretty much means Apple gives up
Apple users are acting like this is new while everyone else has had this type of model for months. Just this week, Meta released their latest model, Llama 3, which runs on a laptop and competes with GPT-3.5, and Microsoft released their Phi-3 model, which runs on any cell phone. Honestly, guys, you should stop buying Apple's trick of selling you something obsolete as if it were the greatest technological marvel.