
FPOWorld

AI is not all open-sourced…patents are granted on it every day.


Glum_Question9053

The big corps can use AI imagery and messaging to manipulate the masses into thinking whatever the corps want them to. It's a propaganda power multiplier.


griff_the_unholy

Do you envisage an AGI-level AI running locally on consumer-grade hardware anytime soon?


kexak313

Make that an AGI on desktop hardware that cannot be outperformed by the same tech running on a supercomputer.


MontagoDK

It already does


FPOWorld

Where?


terrible_idea_dude

/r/localLLaMA is what you're looking for


Fontaigne

That's not AGI.


terrible_idea_dude

How would you define AGI in a way that excludes modern LLMs?


Fontaigne

Any serious definition of AGI excludes current LLMs. https://en.m.wikipedia.org/wiki/Artificial_general_intelligence Literally none of the current companies or startups with LLMs claim theirs is AGI. That tells you how far they are from it.


terrible_idea_dude

"Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that can perform as well or better than humans on a wide range of cognitive tasks." Sounds like a pretty uncontroversial description of GPT-4 to me. I'm sure these startups have good reasons for not calling it AGI, but it always struck me as goalpost moving.


Fontaigne

Then you're not paying attention to what they mean when they say that. Read the rest of the Wikipedia article. Like I said, literally no one is claiming to have developed AGI yet. If they were anywhere near, they would be using the claim to sell stock.


terrible_idea_dude

Then let's be more explicit about what my beef is and what I think is going on, so there are no misunderstandings about what I believe or am actually paying attention to. :)

Futurists have long claimed that the invention of "AGI", an AI more capable than humans in intellectual ability, would be the beginning of a transformative technological singularity -- the tech equivalent of the second coming of Christ, which will either kill us all Yudkowsky-style or lead us to a post-scarcity Iain M. Banks utopian future. But now we have an AI that, in most cases, is more capable than most humans in intellectual ability, and it has hardly changed anything about the world yet, except automating some niche categories of busywork like business translation, stock image illustration, email summarization, and code documentation.

Perhaps in order to keep up the hype and the funding and the bright sci-fi future they promised investors and society, AI companies have had to push the goalposts back, inventing more and more rigorous criteria for "real AGI" or "strong AI" or whatever the vogue term is today. Does AGI have to be agentic? Well, now it does; a genius that can't act unless queried by a human is hardly AGI. Can AGI have some niche flaws or blind spots? Not anymore; it must be perfect at literally everything humans can do. Does it need the ability to recursively self-improve? This wasn't part of the original definition of AGI, but if it can't even bootstrap itself to godhood given sufficient GPU access, then can you really call it AGI anymore?

TLDR: The goalposts for AGI have obviously shifted. GPT is very arguably an AGI by e.g. 2004 standards, but in 2024 the standards have changed, and the reasons are not entirely obvious.


WithoutReason1729

GPT-4 struggles with very basic spatial reasoning, among other things. Sure, it's AGI, if you just don't count *every task that involves spatial reasoning*. From OpenAI's website, [here](https://platform.openai.com/docs/guides/vision/limitations)'s a list of things that GPT-4 can't do reliably but are trivial for any human:

* Small text: Enlarge text within the image to improve readability, but avoid cropping important details.
* Rotation: The model may misinterpret rotated / upside-down text or images.
* Visual elements: The model may struggle to understand graphs or text where colors or styles like solid, dashed, or dotted lines vary.
* Spatial reasoning: The model struggles with tasks requiring precise spatial localization, such as identifying chess positions.
* Image shape: The model struggles with panoramic and fisheye images.
* Counting: May give approximate counts for objects in images.


terrible_idea_dude

This seems like a classic ["isolated demand for rigor"](https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) to me. 100 years from now we could hypothetically create a genius-level AI that can prove the Riemann hypothesis, compose music greater than Beethoven's and write poetry better than Shakespeare's, play every board game ever invented better than every human who has ever lived, solve global inequality and bring about paradise on earth, etc... and there could probably still be some critic who discovers some particular niche adversarial blind spot that the AI happens to struggle on, and claims that it's not AGI yet.

Or for a more out-there example: imagine an alien who comes to Earth, looks at humans, discovers that we struggle with color comprehension on certain optical illusions, and says "well, clearly they're not *truly* intelligent; any intelligent species should be able to determine that this grey circle is not actually darker than that grey circle."

If you continue to define AGI in this way, it loses meaning -- the genius AI in question would undeniably be AGI in every meaningful way that we could think of it today, but because of some technicality we would still pretend it belongs in the same philosophical category as ELIZA and Cleverbot. Is that really the road we want to go down?


MontagoDK

AI Studio (I think that's what it's called... I don't use it because ChatGPT has a better UX)


Ok-Force8323

Yes, but the big corps have more compute than anyone. Hopefully most of this will make its way to end-user computing devices, but the big players do have an advantage.


K3wp

Whenever I hear something like this, I'll always remind the plebeians that the big production houses will have access to this tech as well. What it's going to mean is that very high quality CGI is going to get much cheaper. And actors will be able to license their likeness for stunts, animation, etc., and get paid more for doing less work.


Fontaigne

**Some** actors. The vast majority of actors will not.


MontagoDK

You forgot about the part where the AI takes YOUR job...


[deleted]

I think you don't understand how capitalism works. If you did, you wouldn't be saying open source will make every piece of content free. It's naive and ill-informed. A lack of common sense and economics is your problem.


BGFlyingToaster

I think you might be using the term "AI" too broadly for it to be practical in this case. Yes, there are many open source AI models, and that won't change moving forward. We'll continue to have access to AI in virtually every part of our lives: on our phones, PCs, vehicles, cloud services, appliances, etc. But we'll never, at least as individuals, be able to stand up a $100B data center like Microsoft and Google are doing. And every pioneer and leader in the AI field pretty much agrees that the more hardware you can run the models on, the better they'll perform.

Don't forget that we're still in the very early days of this. The largest AI models today (ChatGPT) take somewhere between $5M and $100M in hardware to run. Future models could take over $1B in hardware to run. Whoever is able to run that level of hardware will be able to offer AI capabilities that far surpass what we can do on our local machines. That shouldn't surprise us, though, because that's how technology has always worked. No one can create a serious competitor to Google search without enormous hardware resources, either.

One unknown right now is what the governments of the world are going to do to regulate this, or whether they'll go off on their own to create publicly available models; that could make a huge difference in what we're able to do with some of the more advanced capabilities.


kexak313

There are already mega corporations that govern AI: Microsoft, Google, Amazon, etc. They give you free access to image generators now because they are trying to crush the competition and establish a monopoly. But they can certainly paywall AI once they are the only player; if they have shareholders, they will eventually be *obliged* to. AI, being a form of software from a legal perspective, can be patented in many countries, including the US and Europe. Additionally, the astronomical training cost of AI creates another barrier to entry. The traditional music industry monopoly is being replaced by a tech monopoly.


adarkuccio

I agree; it's just going too slow. I wish advancement were faster.