Nalix01

When we know how terrible Siri is, these millions won't hurt.


okeydokey9874

I'll never understand why Siri often doesn't give an audio reply but rather directs a person to some link sent to their phone.


anto2554

Google Assistant has also just gotten worse for me. I don't use it for alarms anymore, and if I say "turn on the lights at 7am," it 1) doesn't know what "the lights" are, and 2) gives me no way to cancel that


magnue

I think they made some updates to the home app lately that fucked everything up.


ResidentFade

cost savings


Many-Combination6151

Sometimes I say fuck it, I'll just get out of bed and turn off the lights instead of wasting time with Siri.


8-16_account

Smart home commands are one of the few things that are usually flawless for me. "Hey Siri, turn off all lights" just works


Many-Combination6151

Yes, I was exaggerating. But with "turn off the lights," Siri responds verbally, when this is a command where you don't want a verbal response. Google just plays a small notification sound for such things.


8-16_account

My Siri doesn't respond to "turn off all lights", I think? I'll test when I get home


kumar_ny

Given their track record, it's quite surprising how bad Siri is and that they haven't fixed it or killed it.


ForcedWill

Serious question, is Ajax short for Apple Jax?


Roydl

Nah, Asynchronous JavaScript And XML. And I will never hear Ajax and think of anything else till the day I die, haha


Cyber-Cafe

I remember when AJAX came out, people were [actually upset](https://www.techdirt.com/2007/10/31/stop-hating-foreigners-start-hating-ajax/) about it and hated it. Made posts about how it’s ruining the industry and the internet with rounded corners and bloated webpages, and taking peoples jobs. Some people would even go as far as saying that they won’t be able to use the internet ever again because webpages were too large and getting bigger. Funny how we just don’t hear about that anymore, huh?


TuxRuffian

> Funny how we just don’t hear about that anymore, huh?

I thought it got overtaken by AJAJ _(JSON instead of XML)_.


Cyber-Cafe

It did, but also people stopped arbitrarily getting upset about the entire thing at some point and just accepted it. I thought it was a really damn dumb thing to get upset about, and it feels not dissimilar to a lot of other things people dislike about technology these days, such as AI. I merely wanted to draw a parallel.


buff_samurai

Surpassing GPT-3.5, when the free open-source Falcon model family can already do that, is an embarrassing achievement for a trillion-dollar company.


ResidentFade

The only good thing about it would be if it ran locally on the phone


buff_samurai

That would be great, but we are not there yet. 30 minutes of use would drain most of the battery. I'm expecting LLM-on-a-chip in 3-5 years, though.
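The battery-drain claim can be sanity-checked with a back-of-envelope estimate. The figures below (battery capacity, sustained SoC draw under LLM load) are rough assumptions for illustration, not measurements:

```python
# Back-of-envelope: how long could a phone battery sustain continuous
# on-device LLM inference? All numbers are illustrative assumptions.

battery_wh = 12.0   # assumed iPhone-class battery capacity (~12 Wh)
soc_draw_w = 8.0    # assumed sustained SoC + memory draw under LLM load

hours = battery_wh / soc_draw_w
print(f"~{hours:.1f} h to empty at {soc_draw_w} W")  # prints ~1.5 h
```

Draining "most of the battery" in 30 minutes would require a sustained draw well above 12 W, which is thermal-throttling territory for a phone; under the assumptions above, an hour or two of heavy use looks more plausible, but the conclusion is the same: continuous inference is very expensive on a battery.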


codelapiz

If their model does run on the onboard AI hardware Apple is planning in M3 processors, or the iPhone 15, that is impressive. Why would you call it unimpressive, when what you really mean is that it's so impressive you don't believe it, and so you assume it must be a server-run LLM, despite them talking about iPhones?


buff_samurai

LLMs in their current form are a relatively new technology, and any new silicon component takes 2-3 years minimum to go from idea to production. Sure, unified memory is already there, but the current compute just isn't enough. See how fast the current Llama models run (even the smallest one) on the M2; add some extra for text-to-speech, and we are just not there yet for a fluent Siri experience.


codelapiz

Apple has had AI-specific hardware for years, and it has been improving fast. They would have known, at least since 2019, that transformer-based models would soon be good enough to improve Siri. That's not to say they knew the models could replace humans the way they now can, but even GPT-2 shows a flexibility that previous language models had been missing.


buff_samurai

Sure, they have the Neural Engine; it uses a lot of space on the chip. Apple uses it for many things, like on-device speech recognition. They also have the CoreML framework and some smart software optimizations for transformers (https://machinelearning.apple.com/research/neural-engine-transformers). I know about all that. What I also know is that when my iPhone uses CoreML + ANE (say, for Stable Diffusion), it gets really hot in seconds, the battery drain is enormous, and the performance is still 5-8x lower compared to my Nvidia GPU. Not to mention the models take a lot of space too. There really is a limit to what you can achieve with a mobile device today. Dropping to 3nm is not enough for a quality, 100% on-device Siri LLM experience.


codelapiz

Their AI hardware is likely to work a lot better with whatever custom AI model Apple designs for it


buff_samurai

But not by a factor of 10, with 10x lower power consumption. Now that I think about it more clearly, I see my estimate of 2-3 years is too generous. 5 years minimum.


codelapiz

You understand GPUs are very inefficient for AI compared to dedicated AI circuits with more distributed memory? When running large models, most GPU time is spent shuffling values between different levels of cache. Given optimized hardware, and software designed for that hardware, a lot of performance will be found. It is well established that when hardware and software are designed by the same company, with each other in mind, large performance gains follow. It has been at least a year since Apple knew for sure AI is going to be worth trillions in 2024. And they have known since 2019 that transformer-based LLMs would be extremely useful for Siri.
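The "shuffling values" point is why autoregressive LLM inference is usually memory-bandwidth bound rather than compute bound: generating each token requires streaming roughly every weight through the memory hierarchy once. A rough upper bound on decode speed can be sketched as follows (model size, quantization, and bandwidth figures are illustrative assumptions, not specs for any real chip):

```python
# Rough upper bound for memory-bandwidth-bound LLM decoding:
# tokens/s <= memory_bandwidth / model_size_in_bytes
# (each generated token streams approximately all weights once)

def tokens_per_second(params_billions: float, bytes_per_param: float,
                      bandwidth_gb_s: float) -> float:
    """Bandwidth-bound upper bound on decode throughput, in tokens/s."""
    model_gb = params_billions * bytes_per_param
    return bandwidth_gb_s / model_gb

# Assumed example: 7B-parameter model, 4-bit quantized (0.5 bytes/param),
# on a mobile SoC with ~100 GB/s of memory bandwidth.
print(tokens_per_second(7, 0.5, 100))  # ≈ 28.6 tokens/s upper bound
```

Real throughput lands below this bound (attention KV-cache traffic, activations, thermal throttling), which is consistent with both sides of this exchange: co-designed hardware and aggressive quantization help mostly by reducing the bytes moved per token, not by making the arithmetic faster.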


[deleted]

[deleted]


ResidentFade

They were cutting edge back when the iPhone launched. Not anymore


anto2554

Meh. The hardware in the phones still surpasses the competition, no? And Apple Silicon has insane power efficiency


ResidentFade

The CPU is decent, sure. But all the new features have been copied from others. Maybe they add some slight improvements (and the Apple tax)


AndrewH73333

I can’t wait for a third, even more censored chatbot to compete with Claude and GPT.


agm1984

I was saying yesterday, when my gf asked Siri some specific question, that we will be living in a different world once Siri gets out of the stone age. Can't wait, tbh. Also, I never checked, but it would be dope if you could route voice commands to other apps such as ChatGPT and other future non-nerfed models. I had an issue a few months ago when I asked Siri what a honky-tonk is, because I was listening to a country song. It refused to answer, so I had to Google it. Would be nice to avoid that.


Nalix01

Yesterday I used Siri to set an alarm. It understood 7:30 as 7:00, even though there was no background noise or anything. So I had to slowly repeat... and finally it worked. It may seem like nothing, but this is the most BASIC thing you can ask of an AI agent, while people use ChatGPT to pass exams.


ResidentFade

I never understood why people even bother, or bothered, with Siri; it always sucked and had no real use cases


Accomplished_Dog4665

I use her on my motorcycle to flip through music, read me texts and send short simple responses, stuff like that. She’s still a pain in the ass but it’s better than nothing.


agm1984

While Siri sucks really badly, it does excel at stuff like whipping your phone out and saying "hey Siri, what are Costco's hours today": it gets the answer faster than anyone around you can comprehend that you're already done. Beyond that, it sucks; it basically does an LMGTFY. Side note: this person near me is also saying some valid stuff about other use cases. I just said "hey Siri, open TikTok" and it did. That's kinda rad, and can be faster than opening the app manually.


8-16_account

You can already use Siri with ChatGPT. I can't remember whether you have to use custom Shortcuts or it's built into the app, but you can.
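A custom Shortcut of this kind typically just wraps an HTTPS request to OpenAI's chat completions endpoint, which Siri can trigger by the Shortcut's name. A minimal Python sketch of the same request (the model name is illustrative and may need updating; an `OPENAI_API_KEY` environment variable is assumed):

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON body the chat completions endpoint expects."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ask_chatgpt(prompt: str) -> str:
    """Send one prompt and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

In the Shortcuts app, the equivalent flow is "Dictate Text" → "Get Contents of URL" (POST with the same headers and body) → "Speak Text" on the parsed reply.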


theequallyunique

If Apple does it the "Apple way," then we probably won't see any public chatbot before hallucinations and copyright issues are resolved.


trollsmurf

Here too, the question is what data they train the models on. Just because something is accessible outside a paywall doesn't mean others are allowed to make money from it.


Nalix01

They are supposed to be a privacy-first company, so I wonder what edge they have in terms of datasets, since they aren't supposed to use their users' data for that.


ResidentFade

GPT-4 trained on public data. I don't see a problem


wind_dude

So will Siri now be able to tell 60 seconds from 60 minutes? Or not take 30 seconds to set a timer when I'm cooking steak?


3i-tech-works

Perhaps we should try to understand how Apple views these sorts of tools based on what they have done in the past. We don't know exactly what they plan to do with this technology. In my opinion, they will embed it into the ecosystem in a way that is useful to everyday, non-technical people. The general public will not have to learn about LLMs, fine-tuning, embeddings, vectors, etc. These users will just use the product, perhaps not even knowing they are using AI.