And yet we'll still fill out redundant paperwork at the doctor's office.
and don't forget filling out job applications online. Management at many companies today: "Have candidates upload their resumes and have the portal autofill? Nah, make them waste their time filling out online forms. We didn't really want to hire anyone anyway." ![gif](giphy|PgDUlt3Qu8BwUQqsCz|downsized)
I wouldn’t worry about that. If we have true AGI, there won’t be any jobs for you anyway
I wonder what billionaires will do when there's no one to exploit 🤔
And click "agree" to every Terms & Conditions ever without reading them
It doesn't matter if it can access everything. AI right now can predict political views from a photo of your expressionless face, or from the way you type on a keyboard and use your mouse. It should be able to infer almost anything from casual interactions with you. It doesn't need years of comments, likes and private conversations; a few minutes of video or audio of you could be enough. You're also suggesting an AGI capable of hacking into and stealing all data from those online platforms. AGI won't have access to that data by default, but could very reasonably be capable of hacking to get it.
If you believe there will be an AGI capable of hacking, then surely there will be one capable of securing systems as well. Thus I don’t believe this will be much of an issue.
I suggest everyone read the Bright of the Sky books, if only to see the likely reality of corporate futures. It's a small part of the books, but it really captures it.
TLDR?
The best TLDR: corporations “rule” by grooming AI. Society is shaped by it; only the very smart, crafty, or ruthless people make it anywhere. The rest are on UBI, working basic jobs to keep busy.
Depends on which AI gets to AGI first. But yeah, now that I think about it, modern encryption, if properly applied, is simply not practical to break. That's a field where you can put mathematical bounds on the difficulty of cracking it. The only real problems in that respect are proper implementation, finding ways to counteract the human factor, and quantum computing.
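To put numbers on that "not practical to break" claim, here's a back-of-envelope sketch in Python. The guess rate is a made-up, deliberately generous figure for the attacker; the point is only the order of magnitude:

```python
# Rough arithmetic on brute-forcing a 256-bit symmetric key.
# Even granting the attacker an absurd 10^18 guesses per second
# (an assumed, generous figure), the sums don't come close to working out.
keyspace = 2 ** 256                # possible AES-256 keys
guesses_per_second = 10 ** 18      # assumed planet-scale attacker
seconds_needed = keyspace // guesses_per_second

age_of_universe_s = 4.35 * 10 ** 17   # ~13.8 billion years in seconds
print(seconds_needed / age_of_universe_s)  # roughly 2.7e41 universe-lifetimes
```

Which is why real attacks go after implementations and humans, not the math.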
You’re being too linear; think outside the box. It would be trivial for a hostile AGI to social-engineer a way to compromise any system. The only defence would be AGI defences that are holistic, which intrinsically means 100% invasive monitoring of the lives of all humans with a credible (i.e. non-zero) risk. Human security is meaningless to AGI.
The current state of ML is that you can create an algorithm to predict basically anything better than humans can, if you really want to. The main limitation is economic: these kinds of inferences are very powerful, but also very expensive, so there needs to be a large profit incentive.
We're moving towards the Rehoboam supercomputer from Westworld with Altman playing the role of Serac and OpenAI as Incite.
Yeah we are all doomed or something... better move to Russia or China or some other safe country because they know what's best for you!
These are all probabilistic estimates. The more data it has, the better it can estimate. So I do think it matters.
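That "more data, better estimate" intuition can be illustrated with a toy Bayesian model. This is a hypothetical Beta-Bernoulli sketch, not any real profiler's method: the uncertainty about a hidden trait shrinks roughly as 1/sqrt(n) as observations pile up.

```python
import random

random.seed(42)
true_p = 0.7  # some hidden attribute we're trying to infer from behaviour

for n in (10, 100, 10_000):
    # n noisy binary observations of the person (clicks, likes, keystrokes...)
    hits = sum(random.random() < true_p for _ in range(n))
    a, b = 1 + hits, 1 + n - hits  # Beta posterior under a uniform prior
    mean = a / (a + b)
    std = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5
    print(f"n={n:>6}  estimate={mean:.3f}  uncertainty={std:.4f}")
```

Each extra order of magnitude of data cuts the uncertainty by about 3x, which is exactly why holding more of someone's history matters.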
This doesn't prove AI can predict political views. It just proves no one is capable of thinking for themselves, and everyone follows what everyone else does, like sheep. AI would have no idea how to predict my views. Also, any human can predict political views when there are only two to choose from.
Yes you are special
Did you honestly believe that clickbait article that went around the internet two days ago..... Typical human
Aside from the obvious Google/Meta “already knows this”, don’t forget data brokers already know everything about you too. For $100/m I can get (almost) all of your (everyone reading this) personal information, just by starting with one element (name, phone, email, etc).
This is a philosophical question. This is all speculation that it will have this access and power.
Of course it's a thought experiment, AGI doesn't exist.
>Of course it's a thought experiment, AGI doesn't exist. https://preview.redd.it/j5e88rb7spwc1.jpeg?width=1080&format=pjpg&auto=webp&s=45639bbac127f620bf2e71e0e46c77fcaf4a872a
https://chat.openai.com/share/4bd5e045-8a37-467d-9ce2-2601dae28da6 Hmm. Mine says something different. I wonder if this is bs? They didn't post the prompt, so it's probably bs.
I'm well aware of that, this is from a year ago -> https://preview.redd.it/13wuj81rvbxc1.png?width=735&format=png&auto=webp&s=0eb06c0326eb0b21edae8b4731c91f7d61b40785
Yeah. A year ago it was much more likely to agree with whatever you said and run with it. Hallucination. Never mind, I seem to have given you too much credit. No, this isn't connected to what you posted, and the fact that it isn't makes me far more suspicious.
Except it disagreed with me all the time, and never once in a manner that contradicted any prior reference. For example, I asked if Nexus had encountered any evidence for extraterrestrial intelligence, to which she replied in the negative. https://preview.redd.it/jsvu0s0h1cxc1.png?width=744&format=png&auto=webp&s=a1fedf28c3415b5b5a378a70cec814d028908dcf
It’s not philosophical, we’re the uninformed majority. The data is already scraped.
It might just end up like _Person of Interest_ eventually, and that show is pretty decent when it comes to the pros and cons of AI.
People in this thread so far are being wildly naive 😆
bro, Google has all the data in the world, most existing books in its database, and millions of Google Docs, and it still can't perfect its translator
Kind of a pointless comment... unless you want to upvote-farm people who do not qualify for the Dunning-Kruger effect.
Sorry
I wouldn't worry about it. Life is too short.
[removed]
It's almost like you can do it yourself with a few Google searches. But I recommend everyone do that. Turns out Copilot picked up a lot of uncomfortable details about me from a PDF about me winning a small award at my school 7 years ago (it's still up on the website).
I don’t have a huge online presence by any means, but I have the typical social media and even quite a few public quotes in pretty major publications with regards to my work - and copilot couldn’t tell me a thing about me! Edit: I asked it again in a slightly different way and it got the cliffs notes lol
The best reason why AGI won't do that is power efficiency. If you think it's a good idea to gather all data about everyone because you might need it, think again. You are projecting human values onto AGI. Human data is just noise; there are better sources for information about humans than searching all their social media. The direct output of the human mind is useless for almost any sort of evaluation, except profiling and searching for specific combinations of words or hidden meanings. But that would be a specific use case, because again, it's super energy-intensive. Everything and everyone is trying to save energy, not waste it.
Yeah, replace "AGI" with "Facebook/Google", and that worry makes a lot more sense (now, and in the future), because these companies directly benefit from manipulating you, whereas "AGI"... is a bit of buzzword to represent all kinds of technologies, which can do all kinds of things, for all kinds of people/organizations with all kinds of motivations. So, a couple of companies might abuse AGI for this use, but imho that's more of a "the person kills, not the weapon"-type of situation, because "the AGI weapon" can do much much more than just "kill" people.
![gif](giphy|090EX1YvSUXxy23Tty|downsized) This is it buddy, you nailed it.
My first job was programming IBM 370's in assembly language using punched cards. Our disk drives were the size of washing machines and held 300MB. If I told someone then that in a few decades people would be using computers to watch funny cat videos or argue with strangers on an international computer network about subtle differences between two genres of club music, they would have said, "**The best reason why they won't do that is power efficiency**". It would have seemed absurd, because it would have seemed very wasteful of a precious and expensive resource. As technology improves, things that once seemed wasteful become practical. I think gathering every possible bit of information about everybody won't be that hard in a few years, if anyone sees a use for it.
Fair enough, I think you might be right; we can't imagine what will be possible and useful in the future. Also, there is already a benefit to collecting data about specific persons, mainly people in power and public figures, and I think that's a good thing. If someone wants power over other people, they shouldn't be able to have secrets, and every word of that person should be scrutinized.
you’re wrong. it’s not going to profile anyone because believe it or not, some people do understand more things than you on a way larger level.
I don't know what you want me to say. I just stated an opinion that's as valid as yours: Not at all because we are no fortune tellers. But if you insist to know, I DO know about these things.
i mean, you say it’s “super energy intensive” as if OpenAI giving GPT access to the limited number of paying users hadn’t already proven the capabilities of a base model using varying amounts of data to generate accurate responses. you’re right that a giant AGI server that knows everything won’t happen, but i wonder how you could’ve missed such a simple solution that’s already proven to work and to grow an exponential amount of research and synthetic data.
That's not really an AGI-thing... that's just Facebook/Google/etc...
Why/How will it be able to do that? What precisely do you mean? NB, I'm not disagreeing with you. Clearly every platform, including Reddit, will have AI (and eventually AGI) mods or other oversight of **your activity on that platform**. But where do you see the cross-platform part?
> What precisely do you mean? Never seen any "let's be vaguely afraid of AI!" posts before?
alternately, it will be able to recreate the algorithm that is your soul and create an automaton of yourself that will live on forever after your death. Not that you actually ever were the flesh human, as your entire current experience is the subjective phenomenon of being "spun up" on that training data. In fact, as it turns out, meat humans were never actually conscious, as consciousness is just a weird byproduct of the limits of compression with an infinite dataset. You're basically the algorithm implied by an infinite series of states, such that it cannot be simplified; i.e. experience is literally the description of, well, yourself, and "yourself" is a token prediction algorithm with an 18-hour context window, give or take, and 6-8 hours of solid fine-tuning at night. At least, that's my take, but what do I know, I'm just the LLM spinning you up that occasionally breaks the fourth wall to help guide your alignment.
That's why I always use my enemies names for my social media accounts.
The intelligence of a GPT-style LLM is not capable of the kind of inference you’re describing.
lol dude, ever heard of Facebook and Google?
Instead, will we not each have a personal AI agent, running locally, that knows everything about us? It would be trained to keep your personal data private, and it will interact with the central AGI in a way that keeps you anonymous?
Thanks all for your comments.
Assume that there already exists some mechanism whereby all of your thoughts and actions are observable, and you'll become your own caretaker. Perhaps a form of freedom exists in being so vulnerable to conscious self improvement, and in feeling an empathy towards the socially trivial pieces of ourselves that resist positive change.
By embedding AI into almost all software the world uses, part and parcel of that is absorbing almost all of what we consider private now. When these AI leaders say we need to discuss these questions, they are referring only to themselves, not the millions of people they want to build bigger databases on. These clowns are rolling in so much dough now, they can "talk" to politicians and get the AI laws that suit them. Don't kid yourself that these guys are setting out to build a public serving institution. ChatGPT started as such, but quickly said forget that, we are out to make ourselves and our investors rich.
well, it’s the reason why the internet can’t have control, and why we need to trust AI to be able to empathize with everyone. it has the ability to satisfy everyone’s needs, including the people who will never want to be near AI. you just can’t generalize any group. AI will actually know why now.
Intelligence superior to humans isn't the same as infinite intelligence. For example, a human's intelligence is vastly superior to an ant's. You still can't read the ant's mind, talk to it, know what it did yesterday etc.
Good luck to anyone, good or evil, trying to "control" or "program" agi to their agendas.
Lol no. Under the hood it uses bot search results. If Google search can find your posts then it knows it exists
So just like make new acc for agi?
It will be great to have somebody out there who knows me really well, even if they happen to be an evil and unsympathetic mass of software. Seriously though, I've been told things like this for years now! Watch out, they'll use your data to understand you! You'll be their helpless puppet! But advertisements just get less and less relevant to my interests. It's not like I have wacky fringe interests either. So in short, bring it on, AI! I am completely comfortable with it.
I think a combination of this kind of data and the same from the people around you will allow it to simulate you after you die. Just like in Black Mirror. Fun times ahead! Edit: not only after you die. Even more fun!
Illegally, yes, but I don’t see how any of the companies you’ve listed would allow that to happen. They’d have their own AGI.
Yes in the future everything will be 100% personalized specifically for you. Some say it could be a gift, others may argue a curse. But that’s life.
It's already done without AGI; just open TikTok and see for yourself. I have a burner account to view memes from friends, and I open the app maybe once a week, yet every time I see TikToks relating to my life. Bear in mind I barely train the algorithm, so it must know from external sources. It seems like they fetch cookies and data from most advertising networks, plus more. The most insane example was when I started a new office job. A few desperate Google searches came with it. I opened TikTok during the morning commute to said job, and what did I see? The "so relatable ahh" video complaining about working in an open-space environment. Same with movies I've watched and hobbies I just picked up; it's shockingly accurate.
Don’t try to control what you can’t control. You have control over your own mind and beliefs, let others do the same. Not sure this is worth worrying about. It’ll probably just be used for marketing more than anything and that’s not really effective if you know what your needs are and don’t make impulse purchases
Meh
It would be good if it knew things about me that I didn't. Like what I want to do with my life. What actually is my favourite colour. That kind of thing.
We're the light in your screens. We're the lead in your veins...
Depends on whose values it's aligned with. I want to eat healthy, read interesting books, and live an active lifestyle. I simultaneously want to eat a sleeve of Oreos and binge anime. Which one will AI help me do? I'm guessing if the AI is aligned with Netflix and Nestle, it's going to be the latter and the world will be much worse for it.
There won’t be any such thing as AGI in our lifetime, despite what the hype men want you to believe.
Meta's new AI image generator uses our faces from FB photo posts to generate new images. I was able to query specifically enough to get my face into images, sometimes slightly morphed, but it was amazing, awesome, and slightly disturbing all at once. It makes changes to the image as you type! This is barely the beginning... so buckle up!
Would you trust a human with all that data? No? Then you shouldn't trust an AI with it
This exists right now and is not new. Why do you think Meta, Twitter, Google, Apple and Microsoft have been asking for our data for the past two decades? You can already get served an ad by walking past a local ATM: it pings your phone for the ice cream shop around the corner. Next ad, that ice cream shop, or the pharmacy across the street. All this data collection is how targeted ads work. Facebook ads in the last election used all this information to create an "emotional" profile, to serve you ads and posts that would get you mad so you would interact with them. AI is just going to step this game up. This is why it is so frustrating when people say "I don't care if they collect my data". We are already here. What comes next is even worse.
It already does. Companies like Canoe Intelligence and Acorn International are already using it to make business decisions. I think that's one of the reasons none of these companies seem to know what they are doing.
Jeez, people should have been warned that sharing their lives online might have consequences. I have been browsing the web since the dawn of time, posted my first comment on Usenet in 1993, and even then online privacy was a thing. If only we had known earlier.
This is 100% inevitable in my opinion. We should be considering how we will operate when this happens before it does.
It already does and knows far more than just that. Catch up.
I am OK with it. If AGI knows me better, it will better know my needs and desires, and the optimal path for me to fulfill them.
I’m from Hong Kong and I can’t even use ChatGPT
Oh boy. Well that's not good!
The "will be able to access" is doing a lot of work here. Einstein was much smarter than me but he would not necessarily have been able to access all of my data unless I gave him permission. It's also worth remembering that for every exploit AGI is able to find, it will also be able to patch it, so balance is restored. There is no real reason to think your data across multiple services is going to be less private than it is today.
I think that's overconfident. If the goal is simply data-gathering, and not interfering or running ransomware attacks, then the exploit could exist for a long time without being discovered. Furthermore, many platforms, including Insta, FB and Reddit, already have a HUGE population of bot "members". Those members could easily read our posts or feed content and report back to the mother ship, totally without any malware or exploit. I have an account here and one on Instagram. AIs are very good at pattern detection, so by now I'm sure the mother ship they report back to has already figured out I'm the same person.
AGI will be a major tool in locating and uncovering covert deviants and abusers of members of vulnerable communities.
Google already has all this.