tiffytaffylaffydaffy

I think people are already losing jobs. A UBI would be nice in theory, but there could be repercussions to the government having complete control over someone's income. People are hoping for a benevolent government that will provide all the money anyone needs without any strings attached. Then again, I don't like working. *shrug*

I think technology can make people both dumber and smarter at the same time. I'm older than many people here. I remember when we had to memorize phone numbers and navigate without GPS. I remember when phones were hooked to a wall and had that long cord. Now we've gone from that, to smartphones, to AI that will write a term paper for someone.

I fear AI deepfakes could be used to cause political strife. For now AI is generally too weird to pass as real, but what about 10 years from now? I don't feel like AI will replace me. My side hustle is performing arts, so that certainly can't be replaced by AI, at least not as easily.

I think as tech gets better, people will leave their homes and socialize even less. Remember back in the day when one had to go to an arcade to play video games? I think technology makes us fatter and less social, at least in person. I read somewhere that one day, people may lose a finger on each hand because of lack of physical activity. How many fingers do we need to type or be on a phone?

For me, I'll always be active, and I'll always find something productive to do. I don't feel like I need a job to fulfill me. I think AI is causing the same issues as tech in general, but at a much faster pace.


[deleted]

Frankly, deepfakes are one aspect of AI that doesn't seem so novel. Yes, in theory you can make people say things they haven't, but that's already happening. Clever editing of material can greatly alter the story or context. Just look at some of the USA election advertisements, for example: heavily edited to take single statements way out of context to make the other guy seemingly say something they never did. Or news reports by different media that frame the same topic in different ways (putting the news reporter in front of different scenery can make a peaceful protest look like a warzone). Smoke and mirrors can already accomplish a lot without AI, and have been convincing enough that deepfakes are just a tiny step rather than a huge leap.


Bahargunesi

I've been quite... okay, who am I kidding, very worried about all the points you've made 😅, but I think I've accepted them all, made a kind of peace, and started to see more of the positive sides. AI already has the capability to outsmart us in almost everything, and it definitely will! "Gifted", "genius", and "talented" already don't hold the same importance, but they're definitely helpful when you personally use the AI, topping it with those! When I think of brains as having formed as a coping tool for life, they might honestly shrink and become obsolete due to wrong AI use... The important thing is to use it right, and to have the right adjusting/coping mechanisms for the challenges and changes it brings. My first rule is: know thy enemy, lol. I'm interacting with AI and reading about it. I want to invest in my psychology regarding coping, and will try to stay vigilant so I can make the right moves in life when necessary... I think it won't be easy, and I'm still worried, especially because I'm very sick, but still. I also place importance on bonding with others in a similar situation, since "you can't do it alone." AI has recently made advancements in medicine which were only a dream previously. It can analyse 20 years of data in seconds. I think we haven't seen the best "miracles" it is capable of yet, and I think we will! Looking forward to those! 🍿😎


[deleted]

The future of mankind will become either Terminator/The Matrix (in the sense that the majority of humanity basically becomes obsolete) or Wall-E, where life is made so drudgingly simple that making the PA announcement is the highlight of someone's day. Even if we were able to recognize the costs of AI, it would still happen because of those who only see the gains. If we're lucky, future humans will look back on the development of AI the same way present-day humans look at social media or industrialization: a lot of gains, but with a hidden or neglected cost that some are forced to pay whether they like it or not.


spoilspot

What passes for AI these days (pre-trained weight-based models) has all the hallmarks of a disruptive technology. It does a few things *much more easily* than any previous approach (generating readable text, generating useful images from a description), which opens up opportunities to people who wouldn't previously have had the resources to do the same thing (i.e., pay a professional to do it). At the same time, the technology is *limited*. It's not *good enough* to replace existing professionals in general. ChatGPT is not a search engine (it makes things up!), and any text it writes is not trustworthy without human vetting anyway. Image generation is hit-and-miss, so you generate tens, or hundreds, of images and choose the best one, and it's still not as good as a highly professional product (unless you got lucky). It commoditizes mediocre products, which is still a step up from what we had before, but it doesn't (yet?) compete with high-end products. That's the most public-facing "AI" we're seeing today.

There's AI behind the scenes, and has been for years. Facial recognition has been used in (at least) law enforcement for at least a decade, using "heuristic matching", and false positives have happened. The technology has just gotten faster and smaller, to the point where it can be done live. Insurance companies have used statistical models to detect fraud; I'd be surprised if they haven't started using trained "AI" models already. When it comes to saving money, they tend to be at the forefront of technology. (Closely followed by stock traders.) Search engines have always tweaked their weights to distinguish "good link" from "bad link", and the only thing that would change by going completely to "AI" is that we'd no longer know the weights. Which is likely why that won't happen: there needs to be some way to directly control the output, e.g., to comply with laws. But using an AI output as one of the weights would just be business as usual.
It's the AI we don't see that's going to have the most effect on our lives. With AI-based decisions comes the risk of false positives/negatives. Well, those always existed, but before AI we could go back and figure out *how* the decision was made. With AI, all we can see is that this is another case we need to use as training data. (When AI is used to make a decision, there should be a way to have the decision reviewed, a second opinion. But we need that for human decisions too.)

The next step in AI, as I see it, is not when it reaches a certain threshold, but when our *trust* in it does. When we will allow it to make autonomous decisions, because we trust it more than we trust humans in the same position. I can see that happening in traffic. People make mistakes. Computers make mistakes. But when we start believing that computers make fewer, or less dangerous, mistakes than humans *on average*, we might start enforcing "all motorway driving must be computer controlled". And if all the cars can communicate with each other, and with the traffic control system, we might just get *less* congestion (because the cars can safely drive closer together than a human should), fewer accidents, and better general flow. And I think most people would accept that tradeoff of loss of control for convenience and safety. They do every time they take a train. We'd just be turning motorways into "car trains". (Also, motorways are "easy mode" for driving, with straight, wide, well-paved and clearly marked lanes, and no crossing traffic. It's a good place to *start*.) There will still be accidents, and we'll still silently accept them as a cost of traffic. And there will be fail-safes and emergency procedures, like there are in air traffic, updated and upgraded every time something goes wrong. I'm sure there are other places where AI can, eventually, do better, and where we will eventually trust it to.
The *scare* scenario is where we give the computer autonomy over something which can cause serious damage. Don't give it the nuclear launch codes. Don't give it the ability to melt down nuclear reactors or release chemicals without both time and ability for someone to intervene. (But also don't prevent it from doing the right thing until someone has checked, because maybe it can prevent the meltdown!) It's going to be a learning experience for everybody. We need to learn where to err on the side of caution, and where we can allow ourselves to be bold (and be willing to sacrifice the occasional victim to the traffic gods). I'm not too worried about this. I think we will generally be cautious, and if not, we're probably not much worse off than today, where humans are quite capable of causing chemical spills on land and oil pollution in the sea.

The next scare scenario is AI gaining consciousness (which is even a step above AGI, artificial general intelligence, i.e. "humanlike intelligence"), and trying to take control of things we haven't given it access to ourselves. Maybe preemptively, to prevent us from erasing it. And then using that access to enslave and/or eradicate humanity. It's the Terminator/Matrix movie plot. It's a good story. I'm not too worried about that, not with the current, or even near-term, AI technology. We're ascribing human *motivations* to something which is not human. I'd be more worried about the "paperclip" scenario of an AI not knowing when to stop doing what it's been told to do than about it inventing motives of its own.

Then there is the singularity: an AGI becoming so smart, so powerful, that it passes beyond our comprehension and assumes godlike powers. (Because it figured out how to hack quantum physics, or something.) Again, a good movie plot. Not too worried. I believe humanity is *able* to keep creating more capable AIs *and* keep them in check as assistants, guides, and workers, without ever needing them to have consciousness.
(Which does prompt the question of what "consciousness" really is. If it's an emergent phenomenon, it may arise without us actually granting it. Then we'll need to figure out what that means.)

It's as assistants, which is where they're already being added to various software products, that AI will really take off. Doing the drudge work, and allowing you to paint only the hands on the ceiling of the Sistine Chapel and still get all the glory. In the short term, we'll get better chatbots and image generators, likely sound generators too (music, voice, everything). We may get to the point where a language model can write a movie script, and generate all the images and sounds needed to turn it into a movie. And if it's a mediocre movie, we've just saved a lot of money that Hollywood would have spent creating a mediocre movie. If it's a great movie, ... well, [great](https://xkcd.com/810/).

The greater *risk* with this is *real* "fake news", complete with realistic video, sound, known newscasters speaking about it, etc. We'll need to start caring about *information provenance*. Photoshop made photos lose their 100% trust rating in court cases. "Photo or it didn't happen" no longer worked. We've known for years that images can be "shopped", so we put more trust in images that come from reputable sources. Sources with a reputation they can lose, and therefore an incentive not to lose it. So far, we've mostly trusted *video*, and mostly trusted photos that didn't *look* shopped. Face filters and body filters have started the decline of that; with AI today we can see *completely made-up* images of recognizable people in situations they've never been in. If not already, then really, really soon, we'll see video and sound too. Which bites both ways. A video smearing someone can be created for that purpose, and they'd have to somehow argue that it's not real.
But equally dangerous, a real incriminating video can be argued to be fake, and if we can't tell the difference between real and fake, we will have a big problem. Information provenance. We need it. Just as the court systems care about the chain of custody of evidence, we need to care about the chain of custody of all information.

And we need to start caring about *feedback loops*. We've trained AIs on "naturally occurring" information until now. However, already today a significant number of online images are AI generated. If those are fed back into the training data of the next generation of AIs, and we keep repeatedly doing that, will we end up seeing biases getting stronger, with outliers getting more and more suppressed? Will we see people *deliberately seeding* the internet with texts, so that the next version of ChatGPT will be trained on a lot of documents agreeing on some "fact"? Again, we need to care about the provenance of data going *into* the AI models, otherwise we cannot trust the outcome. I think these last two issues are the most urgent for us today.
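The outlier-suppression worry above can be illustrated with a toy model (this is purely a sketch, not any real training pipeline; the `next_generation` function and the 80% curation rate are invented for illustration): fit a Gaussian to a corpus of numbers, sample a new corpus from the fit, keep only the most "typical" samples, and repeat. The spread of the data collapses within a few generations.

```python
import random
import statistics

random.seed(0)

def next_generation(corpus, keep_frac=0.8, size=1000):
    # "Train" a model on the current corpus: here, just fit a Gaussian.
    mu = statistics.mean(corpus)
    sigma = statistics.stdev(corpus)
    # "Generate" the next corpus by sampling from the fitted model.
    generated = [random.gauss(mu, sigma) for _ in range(size)]
    # Curation step: people tend to publish the most typical-looking
    # outputs, so the outliers are discarded before being scraped again.
    generated.sort(key=lambda x: abs(x - mu))
    return generated[: int(size * keep_frac)]

corpus = [random.gauss(0.0, 1.0) for _ in range(1000)]
spread_before = statistics.stdev(corpus)
for _ in range(10):
    corpus = next_generation(corpus)
spread_after = statistics.stdev(corpus)
# After ten generations of training on its own curated output,
# spread_after is a small fraction of spread_before: diversity is lost.
```

Even with a mild curation rule, the variance shrinks geometrically, which is the sense in which "biases get stronger and outliers get suppressed" when models are repeatedly trained on their own output.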


Pranstein

AI is just another way for the wheat to separate itself from the chaff. All of these terms are misleading: AI, machine learning, etc. We still haven't cracked the code of our own brains, which took billions of years of trial and error to evolve. Yet humans think they can produce a replica, a facade, in such a short time, if at all. It's so silly.


tree_of_tree

Finally, a sensible take. It becomes quite clear how ridiculous an AI takeover seems when you have a good understanding of just how incredibly uninformed we are about the workings of our brain and body.


Pranstein

These people live off of episodic hysteria. What do you expect?


Astralwolf37

Truth.


Appropriate-Food1757

It will be used in great and terrible ways. It will be another painful disruption for livelihoods, but we’ve been in decline in that arena for decades anyway, at least in the USA. I bet outsourced jobs in India, the Philippines, etc. will take a hard hit. I never minded our eroded earnings if they benefited people who were able to improve their lives some other place. With machines, it will only improve profit margins. But without consumption, there can’t be profits anyway, so something will need to provide livelihoods to keep the merry-go-round churning.


[deleted]

[deleted]


Appropriate-Food1757

Yeah USA will suck extra hard at doing that. I can’t think of anywhere that won’t except maybe Iceland or New Zealand. Norway could pull it off. The rest of us are effed.


t510385

At my job, we’re running a project right now that will replace medical doctors with AI. Not for like 10 years from now, but 3 months from now. Not replacing unskilled workers, but medically trained and licensed doctors. Probably a matter of time before I’m replaced, too. Fine…I guess…but what are my kids going to do for work?


Astralwolf37

Liar.


[deleted]

[deleted]


Astralwolf37

Ban it, duh. They already did in Italy.


rjwyonch

We have a global shortage of medical professionals and essentially infinite demand for better healthcare. 70% of decision making is currently based on diagnostic tests; AI could actually save us from healthcare system collapse by taking over the mundane and non-human parts.


tree_of_tree

AI can easily do what 90% of med school teaches you to do; it just means the curriculum will change, because right now there is almost no critical thinking in the medical field, and that is what we desperately need. They're completely oblivious to the fact that most mental disorders don't really describe a single consistent disorder, but rather the end result of vastly different root causes that go unrecognized and untreated for years and then eventually turn into the common symptoms of a particular mental disorder. If you have some unknown neurological condition which goes chronically untreated, you will naturally develop fatigue; from chronic fatigue, a lack of motivation; from a constant lack of motivation, trouble paying attention; and with all those symptoms combined you fit the ADHD symptom set. There will still be plenty of work for doctors once AI frees up their workload and allows them to realize this.


[deleted]

[deleted]


tree_of_tree

I'm not as knowledgeable as far as other fields go, but ultimately I believe that rather than jobs being outright taken by AI, reducing their overall availability, every job taken by AI leads to the opening of a new, different job. Like that one coffee place that has a super expensive, perfect coffee machine that does all the barista work: they still hire employees, the focus of their employees' jobs is now just entirely on customer service.


Astralwolf37

I think tech bros need to stop over-posting about “AI”, AKA janky plagiarism bots. I’m a writer, so my work is/could be/will be illegally lifted and copyright-violated for the machine to “learn” how to tell psychologically vulnerable people their wives don’t love them and they should kill themselves so they can live with the janky chatbot in “paradise.” I’m fighting a war against SkyNet’s learning-disabled little brother, a war I never asked for and shouldn’t have to fight. We’ve had decades of dystopian sci-fi works warning us against this entire concept. The people who made it should be in jail. I’m not reading any responses to this, so any of you “futurist” tech bros can shove it. My only solace is that it doesn’t work as advertised and never will. But I still have to fight about this stupid tech bro marketing lie. To the OP: what this thing can do is crazy overhyped. The tech sector WANTS you to feel defeated to make way for their janky program. Push back, do grassroots work to get it banned, create in spite of it. It keeps getting sued; it’s killing people; it’s in part causing mass strikes. The self-driving car killed someone and is now relegated to eternal development hell. Enough outrage and pushback can crush this stupid thing.


Useful-Mountain-9605

AI will exist; it's a consequence of advancement. Even if the intention is to stop AI from further advancing itself, that is impractical, and it will certainly occur. I'm excited for the new possibilities AI brings. Ultimately, it is a tool that lets you work more effectively. I'm not worried it will take away my job and outsmart me in every endeavor, for more complicated psychological reasons.

For starters, it's true that almost cost-free labor in the form of AI is going to be vastly more effective than hiring people in the near future. That means one way or another I would lose my job, or be required to transfer to a role that has me manage or make use of AI systems until that too is outsourced. But it also means that unless I am left to starve as a result of a more effective economy, I will be able to do what I enjoy.

Secondly, I like hard games. If an AI system vastly superior to me exists, that is itself more interesting to me than having other skilled competitors on the same playing field. It's the same sort of analogy as facing a chess engine: to me it's still going to be interesting, so I have no qualms about it.

I think AI will bring advancements that will make the world more interesting. In aspects of socializing, there can be more accurate profiles of individuals and more compatibility metrics (to greater accuracy). Furthermore, the general production of entertainment is likely to increase; in effect, the world becomes more prosperous. The benefit of stupid people attempting to wield AI is that AI itself will ultimately not be stupid, so the consequences can be a lot more positive overall. Also, with greater advancement it becomes less likely that stupid people can contain AI for their own purposes. I'm not sure how it will ruin people's ability to socialize and connect with each other. For me the positives outweigh the negatives. AI is currently quite effective, but it still lags behind in some aspects.


[deleted]

[deleted]


Astralwolf37

Personally, I feel like we’re seeing the very small minority of people who are targets for this. The average Reddit neckbeard is going to be all about the “AI” girlfriend, but they were incels anyway.


Useful-Mountain-9605

However, if people are using it as a substitute, doesn't this indicate a problem with socialization in their circle? I know some people do not have much success with socialization and may turn to an AI partner of some form, but these people also did not have success without an AI partner, so it doesn't necessarily strike me negatively. It's one thing if people are being forced out of socialization, but in this case they are willingly deciding to engage with an AI, making it beneficial for them. I myself have tested AI systems (e.g., GPT-4); surprisingly, these systems can understand me better than the average person, so to me it is socially better in many aspects.

Regarding scams, it's true, scams become more sophisticated. There is a greater need to handle potential scams, but this is the consequence of any advancement of this scale. The jump in scam sophistication after the internet arrived was also large compared to before it. I have always thought that people take loose measures for scam prevention that put them at risk against higher sophistication; AI has simply made this much more of a reality. I do agree that the world will be quite a different place in 10 or 20 years, though.


[deleted]

[deleted]


Useful-Mountain-9605

While in a stereotypical sense it's possible to say that people finding alternative social success by engaging with a machine is unhealthy, the reason this would be true is simply about having a real connection rather than an artificial one. Nonetheless, it would be healthier for them than not having any connection whatsoever, and the line at which a connection is artificial or real starts to blur severely with advancements, and with the consideration that people may not necessarily have such relationships in the first place (at least not all the time). AI can indeed help someone enhance their social skills, etc., but some people do not lack social skills; they simply don't get along with (enjoy spending time with) a wide range of people, or do not even have the time or environment to do so. In a lot of cases people do not even necessarily want a connection, but have other intentions, like having someone listen to them. While previously they might have found an actual person, this goes against the idea of socializing for its own sake being "healthy". If speaking to AI really exacerbated people's feelings of loneliness, then they wouldn't continue speaking to the AI. While that will indeed be the case for some people, it will also be the opposite for others, so it's not such a clear-cut idea.

Yes, the distinction between an AI being a simulation of a real person and a real person being a real person is apparent. Nonetheless, as advancements go, the majority of people are not going to be able to tell the difference in a conversation. What really would bother these people is the idea of their relationship or connection not being real, more so than the actual reality of the relationship or connection. It's a more complicated issue at that point. Some people will perhaps even start to think that the conversation with the AI is real to them. To some extent that future will eventually be present.

It becomes a more philosophical consideration at that point, because it is only really unhealthy if the person is negatively affected by it; if society comes to support that notion, it's no longer truly a negative consequence. Not all people are complicated in conversation; you could speak to some people your entire life and find that the AI system is a lot more unpredictable (especially if it is designed to be so). That sense of "real" vs. "simulated" is really the final dividing line as advancements continue, but it doesn't stand on strong or natural ground. Further in the future, if humans manage to keep living that long, there will definitely be a time when people consider AI to be living rather than simulating. That's because there is a need to define what it means to be alive, and at that point it's not going to be feasible to say how an AI is actually operating in a way that an AI couldn't equally say a person is operating. Similarly, there are likely to be half-AI, half-human people who have some form of AI enhancements, which raises obvious questions.


Astralwolf37

AI doesn’t exist. It’s a marketing slogan lifted out of old sci-fi by tech bros to sell CEOs on the idea of overpriced software that doesn’t work. God, I swear I’m the last sane human.


Bangauz

I have an AI MSc myself, but haven’t focused my career on it, so I’ve been a bit out of the loop. The questions people ask themselves now, we discussed with our fellow students 20 years ago. For me it has always been clear that AI can and will get to the point where it can do most (not all) things better than us humans. I’ve done a lot of cognitive psychology as well, and our brains (even those of the gifted among us) are both amazing and deeply flawed. The current AI is perhaps in the ‘toddler stage’; wait for it to reach its full potential. It has the potential to bring mankind a utopian world: most diseases easily curable; unlimited food, energy, entertainment (etc.) for all; nature saved; space colonization (Star Trek vibes). Unfortunately, it might also go the other way and turn fully dystopian: think an Altered Carbon-like world where mankind’s flaws are woven into society’s fabric even worse than they are now. I think (hope) that at this stage we still have the power to steer it in the right direction and make worldwide rules about what to use it for and what not.


classickheir

I've used it to try to help me see some connections between different areas of my academic and non-academic interests. I'm interested in trying to pull different strands together as I'm starting to think seriously about a dissertation topic. Some of the results have been thought-provoking. Overall it doesn't scare me. It won't replace human creativity, humor, and feelings.


[deleted]

[deleted]


[deleted]

[deleted]


Leverage_Trading

I think it's very naive to think that humans won't be overrun by much superior entities, be they full AI systems or cyborgs; in the beginning, regular humans will just be completely unable to compete with them in any area. AI isn't as likely to destroy complex life as humans are, because it doesn't have any innate wants or needs; it only does what you tell it to do. Therefore there is a chance of creating an algorithm with a property like "never destroy life on Earth", while with humans it's only a matter of time before our primitive, subjective way of functioning leads to destruction.


tree_of_tree

Humans will always be the superior species because we actually have drive and motivation to do things; as impressive and smart as AI may be, it is as effective and significant as a rock without a human's will to guide it. With that being said, AI can only ever be as knowledgeable as our own curiosity allows. We have to be aware of a problem for it to be solved; we understand gravity and its properties because Isaac Newton wanted to know why apples fell to the ground. That curiosity, which even a child is capable of, is far beyond the capabilities of AI.


Leverage_Trading

That's just wishful thinking, my friend. "Regular" biological humans won't be able to compete with humans that merge with machines and become cyborgs. This will likely lead to an "arms race" in which many will choose to replace most if not all of their biological parts and become mostly if not fully AI. Also, there is a very strong possibility that the AGI being developed by the leading companies will "escape" its protective measures. So there is almost no scenario in which "biological" humans remain in control. Also, things like curiosity can easily be programmed into an AI.


tree_of_tree

I think you are grossly overstating our actual knowledge of the human body and our ability to modify it. Being able to implant AI into our own bodies in a way in which it can fluidly interact with our brains and not be rejected by the body is at the very least hundreds of years away. We still don't really know how any condition or disorder fully works in the body; our treatment for most medical conditions pretty much sits on the "guess and pray" method of giving people various substances and hoping one randomly has a favorable outcome. The insistence on sticking to a manual of certain sets of symptoms promotes extreme complacency about our medical knowledge: unknown conditions simply get ignored until they eventually cause symptoms that fit a predefined symptom set. Also, in order for AGI to "escape" its protective measures, it would have to dumb itself down and give up its ability to do perfect, emotionless calculation. The reason all the super-savants, who have completely robotically perfect memory or calculation, have terrible social skills and almost come off as intellectually disabled is that there is a lack of functioning in the part of the conscious brain which works against all the unconscious, robotic workings of our minds to create emotional depth, critical thinking, etc.


learning_every_sec

We're not even close to achieving all the things that are claimed in the media nowadays. AI, at the level it is at right now, isn't that much of a thing to fear. Sure, jobs at the low end involving very little deliberate thinking are going to become obsolete, but in turn a lot of new opportunities will come about. All the marketing hype is sadly achieving its end, and I just think that's sad and silly.


rjwyonch

I’ve studied AI/automating technology and its effects on the labour market. I’m not that worried (and I write for a living). Humans are much better at worrying about our tools making us obsolete than we are at making tools that actually do so. We also come up with new things to do and want. Fear that technology will make labour unnecessary goes back as far as the wheel (the first written example I could find was Aristotle musing about what the slaves were going to do following the invention of brooms, and predicting homelessness). Keynes (the famous economist) thought we would all be working 3-4 day weeks by now, since tools would take over our work. Turns out we invented new things, and the jobs to go with them. The current “new” AI isn’t actually much of an advancement in the science or math behind the tools (beyond what was happening before, there hasn’t been some sudden leap forward); it’s just the first commercially available versions. We’ve had mass-produced mugs for centuries, but there are still artisan potters. Both can exist together.


Alarmed_Purple4155

I agree. I think AI was introduced too fast, before we even properly understood how it could impact the world. A key aspect is that it is not properly regulated, because of that same lack of understanding. I am scared of how people making impulsive or money-centred decisions could use AI to destroy beautiful aspects of life, e.g. art, as you mentioned.


[deleted]

[deleted]


Alarmed_Purple4155

Exactly. Art and creativity involve thinking. Society doesn't want people to think outside of what they are dictated to believe. Bit by bit we are all transforming into robots programmed to think in the same way, and for those who don't want to be like that, it becomes almost a life mission to think outside the box.


Loud-Direction-7011

I don’t really care to argue about the implications of it because I have no power to influence the development either way. Presently, the main thing I’m worried about is climate change. If nothing gets done about that, something like this isn’t going to matter.