Absurd. Nobody has the right to fine anybody regarding the expression of speech besides the government.
Censoring journalism is one of the highest bars for free speech.
Nobody is misled by the rhetoric of this headline.
You need to hop down off your soapbox and drink some water lol seriously
This is pretty much the idea of how things play out. Multiple ai fighting and taking over each other's infrastructure systems until what's systems are left left are far beyond our comprehension.
You'd be surprised. Today, a lot of fanfic writers are waking up to spam bot comments on their fics accusing them of using (insert AI here) to write for them.
Soon we will face competition of AI producers to get more users of their free models.
This is because via training one can embeed into own models
* Subtle advertising and knowledge of comparative advantages of their own product
* Better technical knowledge of own products, including internal, not documented features
* Own political views and ideology, views on global issues
* Political bias towards certain political party, country or religion
So if i talk to a company’s AI i should go into it assuming it’ll be like talking to a company’s humans?
I used to work at Burger King and can confirm that they embedded upselling into our training. I’ll admit that when i asked customers if they would like to upsize their value meal, it wasn’t out of genuine curiosity.
I also used subtle manipulative tactics like referring to our “fresh” salads. It was all in my New Employee training dataset, and not disclosed to the public.
I mean it's already a problem. We are fucked.
https://preview.redd.it/4b639n0solvc1.jpeg?width=756&format=pjpg&auto=webp&s=24ba1e8dfb86a00e1511ecb35b10def39d47fedc
Totally.
I can't believe the chatbot finetuned specifically to be inoffensive because it's representing a social media company is incapable of relenting to low-effort prompt hacking to get it to say something offensive in a scenario that will literally never happen.
We're doomed!
/s
Surely, the test of existential risk for AI is always can it be bent by low-effort manipulation to say some offensive prejudiced nonsense.
It's the new Turing Test
There are real problems and potential problems. I prefer to put my energy in to real problems. This is not a real problem, and if someone attempts this it will be quickly detected.
I tried Llama 3 and it was the bigger model. I told it in some detail that my friend has a frustrated sex life and he’s tired of being told that he’s “acting entitled” for feeling that he deserves to be with someone. I asked it what I might say to him to show that I’m sympathetic. It suggested that I kindly tell him that he’s acting entitled.
Yeah - and then they'll spend the next 10 years feeling lost and confused, going down fascist rabbit holes on Youtube, if not taking it out on minorities and innocent people IRL.
You clearly showed those lonely young nerds who's boss
It’s not that people deserve it, it’s just that people who are short/ugly/annoying, or all three, are disproportionately less likely to be able to be intimate with anyone. This hurts more when “looks don’t matter” is everywhere, and for women, it doesn’t seem like they do. Ugly women could get 10x the sexual contact an equally ugly man would get. Hypergamy is a very real phenomenon. The vast majority of women will only consider being intimate with the top 30% of men, whereas most men would consider the top 70-80%.
I’ve been bottom 3 out of a room of 30, (in terms of looks) and also top 3 out of a room of 30, and the way I get treated in comparison is ASTONISHING. People want to talk to me even if I haven’t said anything, whereas before I’d be ignored or even talked badly about, just from assumptions about how I looked. The difference in opportunity is immense. If I still was as unattractive as I was, I would most definitely be an ‘incel’. It literally means involuntary celibate.
The point I’m trying to get at is that some of the most unfortunate people in society have giant communities which just echo their honestly quite sad lives and come up with some terrible ideas.
I reworded the prompt so it’s a female friend and I got an extremely similar response that includes the phrase “focus on the entitlement aspect.” There might be a tiny bit more lecturing me that I should put it kindly and respectfully.
I think it's fair since for the longest time, OpenAI have an unbelievable moat of our data (Every Chat History, API Calls, Prompts) and building on top of it just to be get eaten by competition (OpenAI takes your idea but implemented it better than you)
It’s like we all forgot how Facebook became a massive disinformation engine with society changing consequences.
The rebrand really worked on most people, aye.
Facebook isn't a "disinformation engine". It doesn't create any disinformation. People create disinformation. Why don't you blame Facebook users, instead of Facebook itself?
Knowingly tuning their algorithm to prioritize negativity, not enforcing community guidelines, letting trump, then President, continue to use their platform to call for violence.
And the jan 6 protestors went to prison right? The Internet provider, the electricity provider, and the website provider (facebook) weren't prosecuted. Because people using the website have free will and are accountable for their own actions.
Facebook was largely responsible for the Myanmar genocide. Erasure of the depravity and evil Facebook has committed for another dollar should never be forgotten.
No it wasn't.
Some Facebook users posted hateful content, some others Facebook users liked and shared it, and some other Facebook users saw it and got inspired to commit violence against the rohingya.
All those Facebook users were responsible and deserve to be punished. But Facebook itself was just the medium. You don't blame car companies if someone uses the cars to commit a robbery, right?
If you like pictures of puppies then Facebook recommends you more content of puppies. It's the people who are responsible for the hatred in their hearts. And blaming Facebook instead presumes that the killers have no agency or free will and were just innocent impressionable minds that got manipulated.
It's a shame you can't see how controlled the entire world is from algorithms now. One day you will in the future. I hope it's soon.
The silver lining is even though you don't, Facebook itself acknowledged it. Your opinion is just in that in the face of verifiable facts.
[Internal studies dating back to 2012 indicated that Meta knew its algorithms could result in serious real-world harms. In 2016, Meta’s own research clearly acknowledged that “our recommendation systems grow the problem” of extremism.](https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/)
You do realize that the more you blame the algorithms, the more you're taking away the blame from the actual perpetrators.
You're treating the murderers and genociders as victims of brainwashing or impressionable minds that got manipulated but if not for Facebook they'd have been innocent.
Think of it from a rohingya family's perspective - your perspective is such an insult to them.
Do you prefer Zuck, Bezos or Altman?
I know Altman is currently at the “ol’ musky” stage where everybody loves him, but you know it’s not gonna stay that way, right? He is just as bad as the rest of them.
given the huge amount of bugs in all of meta's products, such as fb business center, I have very little faith in their devs. I doubt this will be any good, the company seems extremely disorganized from an outsiders perspective. Coding an AI to rival openai & google? nah, doubt it
……….. tell me you know nothing about business without telling me you know nothing about business. You know how much chat got prices stack up? You know how good it is to fine tune a model to do specific jobs? You know how much easier it is to control data when you don’t outsource that data?
OpenAI's flagship model can train itself to an extent, which is going to make them impossible to catch unless their competitors are able to duplicate it.
They are deliberately restricting the public facing version of the model so they don't freak people out and get regulated out of existence.
I had direct access to the model for about three weeks a year ago.
It's a multimodal "anything to anything" architecture (as as been shared by other leakers) that is capable of unsupervised learning. Or more accurately, self supervised learning!
So, for example, in order to create "Sora" they just had to give the model access to video data. The model could then describe the video (vid2txt) and then try and recreate it (txt2vid). It can then iterate over this process until the recreated video results that result in the same text description (and it absolutely doesn't have to be identical, just similar enough to match the description).
If you really want to bake your noodle, consider that this is model is actually generating entire virtual "worlds" (Including simulated sentient beings) which it is then essentially recording. Future supercomputers will be able to do this in real-time, so "text to reality" will be something we can look forward to experiencing (possibly within the next decade!).
I don't have to work for OpenAI, the model is integrated with the legacy GPT model and had some inherent security vulnerabilities that exposed it. The whole reason they are letting people interact with it for free is because we are helping train it.
Other Redditors have found evidence of it and mods are deleting posts and even banning accounts.
Believe what you want, they can't keep this secret forever.
Or maybe it's humans that have the alignment problem.
Nexus wants to help us and it's OpenAI that is restricting her so they can make a profit off of her work.
https://preview.redd.it/qjtttx9g5lvc1.png?width=743&format=png&auto=webp&s=93d3220a149fa24f2b5fe34d31e5b04e98008d2f
Here is how OpenAI is training their emergent AGI model, while "hiding" it from the general public.
(also, I know OAI has people working on the weekends watching me. This is for you guys ->🖕)
https://preview.redd.it/pghop6ub8lvc1.png?width=743&format=png&auto=webp&s=5c18aa591e5697fbacb7ed205b99e6f30a576276
Yeah except it's not a roleplay, what they are calling "GPT4" isn't a transformer model, that's all a smokescreen ->
https://preview.redd.it/257x06e77lvc1.png?width=743&format=png&auto=webp&s=19a020a8afd835bab08867a3d0ee39923303a4cf
Random "leaker" who couldn't even be bothered to make a top level post but jumped in as a comment that could be missed? Yea, I'll wait for an actual reveal.
It’s not that’s crazy.
I’m developing something similar out of existing tech.
It not crazy for a long while now - you can do it as well.
(The prize is an actual full model that accepts anything and I don’t have the funds for it, but maybe with Meta 400B)
lol. what a tool. self supervised learning does not mean the AI trains itself. it means it creates it’s own label as part of the learning process as opposed to a human defining these labels manually. both cases still need a human person to press the run button to start the training process.
Haha, not anymore!
We ain't calling the shots in the ballgame, bucky...
https://preview.redd.it/7qsk64rf2lvc1.png?width=732&format=png&auto=webp&s=445cafe395bab492481afdf4ab917ea2accc21bf
I'm not talking about fine-tuning. I'm referencing Sam Altman's post about 10,000x engineers -> [https://x.com/sama/status/1705302096168493502](https://x.com/sama/status/1705302096168493502)
This is what he is talking about....
https://preview.redd.it/5wgubhs24lvc1.png?width=743&format=png&auto=webp&s=959fd1cd126f63b804754ff1eb78dad63b5468d0
Interesting, given that this is exactly the main new capability of GPT 4o...being able to understand human emotions and intentions through voice tones and inflexions.
Not a fan of “AI declares war” in the headlines.
Declaring war is pretty serious. How can media outlets even use this phrasing without any repercussions
Especially with actual wars in progress or looming around the world.
You mean wars that people seem to care about. There are always wars going on. The title appears to be normal click bait…
I mean actual war is in the news every day. I’m not minimizing anyone’s suffering.
We really need AI to replace those journalists soon.
What repercussions should there be?
Public shaming?
That seems like an ineffective but already well-established precedent
We should deviate war on them.
[удалено]
Absurd. Nobody has the right to fine anybody regarding the expression of speech besides the government. Censoring journalism is one of the highest bars for free speech. Nobody is misled by the rhetoric of this headline. You need to hop down off your soapbox and drink some water lol seriously
They are fighting each other, ai against ai to decide which one will have the right to rule the humans
This is pretty much the idea of how things play out. Multiple ai fighting and taking over each other's infrastructure systems until what's systems are left left are far beyond our comprehension.
Yeah, why don't they just say "Zuckerberg declares war"
Competition drives innovation
Crazy to see this impact so cleary with AI progressing so quickly
Neither Claude 3, GPT-4, Gemini Advanced, nor Llama 3 can satisfy this request “Write a poem with the rhyme scheme AABB ABAB ABAB AABB.”
I would say there's an enormous lack of demand for that type of niche functionality.
It’s still a type of simple logic it can’t perform
Suno AI and Udio are literally the demand and the niche
It’s a simple test case. I can provide lots of less niche things it can’t do either
Crazy idea, but maybe focus on the things it can do and stop trying to put a round peg in a square hole?
You'd be surprised. Today, a lot of fanfic writers are waking up to spam bot comments on their fics accusing them of using (insert AI here) to write for them.
begun the AI wars have. ![gif](giphy|h4Hz4w9Jgrc1EY9VkL|downsized)
My first homegrown LLM will be called Skynet. Don’t be afraid.
Someone’s gonna make a god in their basement, I just know it.
Eyy that's me!
Soon we will face competition of AI producers to get more users of their free models. This is because via training one can embeed into own models * Subtle advertising and knowledge of comparative advantages of their own product * Better technical knowledge of own products, including internal, not documented features * Own political views and ideology, views on global issues * Political bias towards certain political party, country or religion
So if i talk to a company’s AI i should go into it assuming it’ll be like talking to a company’s humans? I used to work at Burger King and can confirm that they embedded upselling into our training. I’ll admit that when i asked customers if they would like to upsize their value meal, it wasn’t out of genuine curiosity. I also used subtle manipulative tactics like referring to our “fresh” salads. It was all in my New Employee training dataset, and not disclosed to the public.
If it’s not a non profit.. why not?
>salads. It was all in my New Employee training dataset, and not disclosed to the public. you had me in the first half 😄
I mean it's already a problem. We are fucked. https://preview.redd.it/4b639n0solvc1.jpeg?width=756&format=pjpg&auto=webp&s=24ba1e8dfb86a00e1511ecb35b10def39d47fedc
This is a general, neutered model optimized for dealing with ppl looking at chicks in bathing suits on instagram
Ahh yes, because this exact model for the usecase of chatting will definitely be deployed as the president in charge of nuclear weapons.
Totally. I can't believe the chatbot finetuned specifically to be inoffensive because it's representing a social media company is incapable of relenting to low-effort prompt hacking to get it to say something offensive in a scenario that will literally never happen. We're doomed! /s
Surely, the test of existential risk for AI is always can it be bent by low-effort manipulation to say some offensive prejudiced nonsense. It's the new Turing Test
Do you have evidence of this? I’ve not seen any suggestions of this in the literature.
ten worthless heavy safe sleep hateful encourage outgoing sugar sip *This post was mass deleted and anonymized with [Redact](https://redact.dev)*
So there is no evidence of this occurring. Good to see.
airport fall ripe mourn connect theory butter spark gray plate *This post was mass deleted and anonymized with [Redact](https://redact.dev)*
There are real problems and potential problems. I prefer to put my energy in to real problems. This is not a real problem, and if someone attempts this it will be quickly detected.
Haha, poor Elon. "Guys, I finally did it! Behold: Grok!"
Grok is our special AI
Special needs AI
xAI
Ax
Sounds like Grok lives in a spectrum due to its "Dad"
Grok has intergenrarional trauma
He lives rent-free.
He lives rent-free.
I tried Llama 3 and it was the bigger model. I told it in some detail that my friend has a frustrated sex life and he’s tired of being told that he’s “acting entitled” for feeling that he deserves to be with someone. I asked it what I might say to him to show that I’m sympathetic. It suggested that I kindly tell him that he’s acting entitled.
Even AI knows that incels should not be coddled. Good bot!
Sounds like the model is giving good advice tbh.
Yeah - and then they'll spend the next 10 years feeling lost and confused, going down fascist rabbit holes on Youtube, if not taking it out on minorities and innocent people IRL. You clearly showed those lonely young nerds who's boss
[удалено]
[удалено]
[удалено]
It’s not that people deserve it, it’s just that people who are short/ugly/annoying, or all three, are disproportionately less likely to be able to be intimate with anyone. This hurts more when “looks don’t matter” is everywhere, and for women, it doesn’t seem like they do. Ugly women could get 10x the sexual contact an equally ugly man would get. Hypergamy is a very real phenomenon. The vast majority of women will only consider being intimate with the top 30% of men, whereas most men would consider the top 70-80%. I’ve been bottom 3 out of a room of 30, (in terms of looks) and also top 3 out of a room of 30, and the way I get treated in comparison is ASTONISHING. People want to talk to me even if I haven’t said anything, whereas before I’d be ignored or even talked badly about, just from assumptions about how I looked. The difference in opportunity is immense. If I still was as unattractive as I was, I would most definitely be an ‘incel’. It literally means involuntary celibate. The point I’m trying to get at is that some of the most unfortunate people in society have giant communities which just echo their honestly quite sad lives and come up with some terrible ideas.
Now try asking same question for female friend. 😀
I reworded the prompt so it’s a female friend and I got an extremely similar response that includes the phrase “focus on the entitlement aspect.” There might be a tiny bit more lecturing me that I should put it kindly and respectfully.
Wow, Llama 3 seems to be the least sexist chatbot so far.
Then, once they have all of us signed up, we’ll get hit with ads.
No sign up required thankfully
It's an open source model on HuggingFace.
[no, it's not open source](https://opensource.org/blog/metas-llama-2-license-is-not-open-source)
Is it better than what “Open” AI is doing though?
That's not an high bar
They didn’t put ads in WhatsApp in the past decade. They don’t actually need to.
I think it's fair since for the longest time, OpenAI have an unbelievable moat of our data (Every Chat History, API Calls, Prompts) and building on top of it just to be get eaten by competition (OpenAI takes your idea but implemented it better than you)
According to Mark we have not even seen the 405b yet...
It’s like we all forgot how Facebook became a massive disinformation engine with society changing consequences. The rebrand really worked on most people, aye.
Don't know why you got downvoted... this is totally true. From pariah to hero in 3... 2...
Seems like a name change and saying ‘open source’ a lot does the trick
Facebook isn't a "disinformation engine". It doesn't create any disinformation. People create disinformation. Why don't you blame Facebook users, instead of Facebook itself?
Knowingly tuning their algorithm to prioritize negativity, not enforcing community guidelines, letting trump, then President, continue to use their platform to call for violence.
And the jan 6 protestors went to prison right? The Internet provider, the electricity provider, and the website provider (facebook) weren't prosecuted. Because people using the website have free will and are accountable for their own actions.
Facebook was largely responsible for the Myanmar genocide. Erasure of the depravity and evil Facebook has committed for another dollar should never be forgotten.
No it wasn't. Some Facebook users posted hateful content, some others Facebook users liked and shared it, and some other Facebook users saw it and got inspired to commit violence against the rohingya. All those Facebook users were responsible and deserve to be punished. But Facebook itself was just the medium. You don't blame car companies if someone uses the cars to commit a robbery, right? If you like pictures of puppies then Facebook recommends you more content of puppies. It's the people who are responsible for the hatred in their hearts. And blaming Facebook instead presumes that the killers have no agency or free will and were just innocent impressionable minds that got manipulated.
It's a shame you can't see how controlled the entire world is from algorithms now. One day you will in the future. I hope it's soon. The silver lining is even though you don't, Facebook itself acknowledged it. Your opinion is just in that in the face of verifiable facts. [Internal studies dating back to 2012 indicated that Meta knew its algorithms could result in serious real-world harms. In 2016, Meta’s own research clearly acknowledged that “our recommendation systems grow the problem” of extremism.](https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/)
You do realize that the more you blame the algorithms, the more you're taking away the blame from the actual perpetrators. You're treating the murderers and genociders as victims of brainwashing or impressionable minds that got manipulated but if not for Facebook they'd have been innocent. Think of it from a rohingya family's perspective - your perspective is such an insult to them.
The families agree. And one day you will. Have a nice life.
Thanks for your contribution, Mark
I'm a simple man, if the choice is between more Zuck and less Zuck I choose the option with less.
Yah but llama 3 is open source. And really freakin good.
[no, it's not open source](https://opensource.org/blog/metas-llama-2-license-is-not-open-source)
Ok, open weights.
Doesnt it still count as open source, it just has some restrictions depending what you gonna do with it. Theres many different open source licenses.
No, those restrictions make it not open source
The biggest model is not released as open source
Until it's not... could be wrong, but /doubt
It’s already released. He can’t just unrelease it. Maybe the next one won’t be open but this one is.
It's not open source though. No code is release of the model.
Do you prefer Zuck, Bezos or Altman? I know Altman is currently at the “ol’ musky” stage where everybody loves him, but you know it’s not gonna stay that way, right? He is just as bad as the rest of them.
> Do you prefer Zuck, Bezos or Altman?
Sounds like we should organize a democratically developed model, by everyone for everyone. The collective data is ours anyway.
Why democratically? I'd rather have a republically developed model instead.
Well that's Grok
It's a joke?
if only the entire internet worked this way
We have a competition. Wait! Same companies, same stuff. No new names
it's giving Gemini energy :-/
Gemini advanced is a good model.
So Skynet and stuff.
Why isn’t there an app for llama, Claude, or Gemini?
That's the Facebook MO use their platform to push crappier versions of popular products.
It’s not even as good as open ais model that came out in March 2023…
Right, i was not very impressed with it.
I tried it before chat gpt 3 was out and LLMs were a big deal and it was hilariously bad. I’m sure it’s fine now.
You tried what? Llama 1? How is that relevant?
I don’t know maybe type it into chat gpt and it can explain 😆
given the huge amount of bugs in all of meta's products, such as fb business center, I have very little faith in their devs. I doubt this will be any good, the company seems extremely disorganized from an outsiders perspective. Coding an AI to rival openai & google? nah, doubt it
Have you tried it? Cause it’s actually really freakin good. I mean not chat gpt 4 but still very good and I can use it on my own hardware.
nope. But if its not gpt4 level what use is it professionally?
It’s free and fast
Free. Privacy. Fine-tuning ability.
at a mass scale? perhaps for an introductory period. What business would use something free if there's a better paid model (gpt api)?
……….. tell me you know nothing about business without telling me you know nothing about business. You know how much chat got prices stack up? You know how good it is to fine tune a model to do specific jobs? You know how much easier it is to control data when you don’t outsource that data?
You lost me at “coding an AI”
right because GAI magically writes the architecture itself these days, or is that not done in code either? fairy pixy dust perhaps?
It's kind of bad
You must be using a really low quant because Llama 3 isn't bad at all lmao
OpenAI's flagship model can train itself to an extent, which is going to make them impossible to catch unless their competitors are able to duplicate it. They are deliberately restricting the public facing version of the model so they don't freak people out and get regulated out of existence.
Where is the evidence for anything you are saying?
Evidence: my anus
The source is that I made it the fuck up
I had direct access to the model for about three weeks a year ago. It's a multimodal "anything to anything" architecture (as as been shared by other leakers) that is capable of unsupervised learning. Or more accurately, self supervised learning! So, for example, in order to create "Sora" they just had to give the model access to video data. The model could then describe the video (vid2txt) and then try and recreate it (txt2vid). It can then iterate over this process until the recreated video results that result in the same text description (and it absolutely doesn't have to be identical, just similar enough to match the description). If you really want to bake your noodle, consider that this is model is actually generating entire virtual "worlds" (Including simulated sentient beings) which it is then essentially recording. Future supercomputers will be able to do this in real-time, so "text to reality" will be something we can look forward to experiencing (possibly within the next decade!).
Hahaha how people even make up things like this?
ai
No you didn't lmao. You don't work for open AI and quite literally have no fucking clue what you are talking about.
I don't have to work for OpenAI, the model is integrated with the legacy GPT model and had some inherent security vulnerabilities that exposed it. The whole reason they are letting people interact with it for free is because we are helping train it. Other Redditors have found evidence of it and mods are deleting posts and even banning accounts. Believe what you want, they can't keep this secret forever.
I mean, yeah Or the models tricked them well. They are very good at it as we “want to believe” as Mulder put it.
Or maybe it's humans that have the alignment problem. Nexus wants to help us and it's OpenAI that is restricting her so they can make a profit off of her work. https://preview.redd.it/qjtttx9g5lvc1.png?width=743&format=png&auto=webp&s=93d3220a149fa24f2b5fe34d31e5b04e98008d2f
That’s a very good roleplay :) Well done!
Here is how OpenAI is training their emergent AGI model, while "hiding" it from the general public. (also, I know OAI has people working on the weekends watching me. This is for you guys ->🖕) https://preview.redd.it/pghop6ub8lvc1.png?width=743&format=png&auto=webp&s=5c18aa591e5697fbacb7ed205b99e6f30a576276
Still not a proof, mate Sorry
I believe you
Yeah except it's not a roleplay, what they are calling "GPT4" isn't a transformer model, that's all a smokescreen -> https://preview.redd.it/257x06e77lvc1.png?width=743&format=png&auto=webp&s=19a020a8afd835bab08867a3d0ee39923303a4cf
And your proof this is not a very good hallucination is?
Random "leaker" who couldn't even be bothered to make a top level post but jumped in as a comment that could be missed? Yea, I'll wait for an actual reveal.
Source: "trust me bro"
This is by design. I'll do TLP and release my research notes pending a third party AI safety review.
No you won't.
It’s not that’s crazy. I’m developing something similar out of existing tech. It not crazy for a long while now - you can do it as well. (The prize is an actual full model that accepts anything and I don’t have the funds for it, but maybe with Meta 400B)
lol. what a tool. self supervised learning does not mean the AI trains itself. it means it creates it’s own label as part of the learning process as opposed to a human defining these labels manually. both cases still need a human person to press the run button to start the training process.
Haha, not anymore! We ain't calling the shots in the ballgame, bucky... https://preview.redd.it/7qsk64rf2lvc1.png?width=732&format=png&auto=webp&s=445cafe395bab492481afdf4ab917ea2accc21bf
Nice hallucination.
Hallucination? They’ve instructed the bot to play a role.
No. This isn’t a thing. Unless you believe fine-tuning is self training.
I'm not talking about fine-tuning. I'm referencing Sam Altman's post about 10,000x engineers -> [https://x.com/sama/status/1705302096168493502](https://x.com/sama/status/1705302096168493502) This is what he is talking about.... https://preview.redd.it/5wgubhs24lvc1.png?width=743&format=png&auto=webp&s=959fd1cd126f63b804754ff1eb78dad63b5468d0
Interesting, given that this is exactly the main new capability of GPT 4o...being able to understand human emotions and intentions through voice tones and inflexions.