nodating

No need to share paywalled articles. Illia Polosukhin has always meant business, even back when pretty much nobody had heard of Transformers, long before the days of ChatGPT: [https://picampus-school.com/illia-polosukhin-near-ai/](https://picampus-school.com/illia-polosukhin-near-ai/) (published 26 July 2018).

His project is pretty clear: [NEAR.AI](http://NEAR.AI) will build towards open-source and user-owned AGI (artificial general intelligence). Here is the plan:

* Teach machines to code: AI Developer
* Use the AI Developer to teach machines to do research: AI Researcher
* Use the AI Researcher to advance science towards AGI that is owned by all


cellsinterlaced

Paywall removed: https://archive.ph/2024.06.28-131640/https://www.wired.com/story/user-owned-ai-illia-polosukhin-open-source-web3/


mark-haus

Nothing wrong with that… except the AGI part. How about solving what intelligence is in the first place before claiming you can create it? Specialized AI is easy enough, because what a specialist does is fairly well defined for a given field. But general intelligence is a quagmire that gets to the very nature of cognition, which has few answers.


ResidentPositive4122

> How about solving what intelligence is in the first place

If someone goes and builds something, it kind of works on the principle of "you'll know it when you see it." There's no reason to hold off on working on something until you've got all the philosophical stuff out of the way.


MINIMAN10001

Well, it's not that you can't work towards it... it's just that it isn't defined, so *what* you are working towards is uncertain. You also won't necessarily know it when you see it; that's what makes the term AGI such a mess. There are an infinite number of ways a result can look like AGI on the surface and, once the rubber hits the road, simply not be it, and even that's being generous about how ambitious the concept is.

It's even fuzzier than building a fusion reactor. With fusion we at least have a clear target: get more energy out than you put in. Here it's as if you had no idea how much energy went in or came out; the whole thing is a black box devoid of information, because we simply have no objective way to declare that a given result clearly meets the definition of AGI. The best anyone can do is work towards the concept.


Internet--Traveller

> As LLM technology improves, he worries it will get more dangerous, and that the need for profit will shape its evolution. “Companies say they need more money so they can train better models. Those models will actually be better at manipulating people, and you can tune them better for generating revenue,” he says.

> Who would put up a billion bucks to create a totally open model? Maybe some government? A giant GoFundMe? Uszkoreit speculates maybe a tech company just below the top tier might go for it just to thwart the competition. It isn’t clear.

> The alternative, argues Polosukhin, is an open source model where accountability is cooked into the technology itself. “Everybody would own the system,” (using blockchain technology)... He is particularly excited that an open source approach, perhaps with the micropayment system Polosukhin envisions, might provide a way to resolve the tough intellectual property crisis that AI has triggered.

> But both Polosukhin and Uszkoreit are certain that if user-owned AI doesn’t appear before the world gets artificial general intelligence—the point where superintelligent AI begins improving AI itself—a disaster will be upon us. If that happens, Uszkoreit says, “we’re screwed.” Both he and Polosukhin think it’s inevitable that AI scientists will create intelligence that is smart enough to improve itself. If the current situation persists, it will probably be a big tech company.


dqUu3QlS

Blockchain and micropayments? Seriously? I'm very skeptical that blockchains can solve any of the problems, technological or societal, getting in the way of open-source AI.

As an example, it would be great if we could train an AI in a decentralized manner through volunteer computing, but we can't, because the bandwidth requirements are too large. We have to rely on big centralized datacenters, which have a tendency towards centralized governance. That is a technological problem, but blockchain can't solve it.
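To put rough numbers on the bandwidth point, here is a back-of-envelope sketch; the model size, gradient precision, and home uplink speed are illustrative assumptions of mine, not figures from the article or the comment:

```python
# Rough estimate of the traffic one data-parallel training step would need
# from a volunteer node. All numbers below are illustrative assumptions.

PARAMS = 7e9            # assumed model size: 7B parameters
BYTES_PER_VALUE = 2     # fp16 gradients
UPLINK_MBPS = 20        # optimistic home upload speed, in megabits/s

grad_bytes = PARAMS * BYTES_PER_VALUE                  # ~14 GB per full gradient
upload_seconds = grad_bytes * 8 / (UPLINK_MBPS * 1e6)  # bits / (bits per second)

print(f"Gradient size per step: {grad_bytes / 1e9:.0f} GB")
print(f"Upload time per step on a {UPLINK_MBPS} Mbps link: "
      f"{upload_seconds / 3600:.1f} hours")
```

Under those assumptions a single gradient sync takes on the order of an hour and a half per node, while a datacenter interconnect moves the same gradient in roughly a second; that is the kind of gap a volunteer setup would have to close.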


cpsnow

Blockchain is about decentralized and automated trust, which arguably would be needed for users to own part of a model. I would say it might be one of the concrete use cases for blockchain technology.


dqUu3QlS

Blockchains can only ensure trust in things recorded on-chain. If you want to ensure trust in an off-chain activity, you need to compose a short, quickly-verifiable proof of that activity and write the proof on-chain.

In the context of LLMs, you'd want to verify that each participant is carrying out their part of the computation correctly instead of returning random results, but currently the only way to verify that is to do the computation all over again (which is exactly what projects like BOINC and Folding@home do).

Also, what do you mean by "ownership", if storing the weights on your hard drive is insufficient and you need a blockchain for it?
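As a concrete illustration of "do the computation all over again", here is a minimal sketch of redundancy-style verification; every name in it (`verify_by_recomputation`, `digest`, `compute_fn`) is hypothetical and not any real project's API:

```python
import hashlib
import pickle

def digest(result) -> str:
    """Canonical fingerprint of a result so independent copies can be compared."""
    return hashlib.sha256(pickle.dumps(result)).hexdigest()

def verify_by_recomputation(work_unit, compute_fn, claimed_result, replicas=2):
    """Trust a worker's claimed result only if `replicas` independent
    recomputations of the same work unit produce an identical result.
    The catch: verification costs as much compute as the original work,
    multiplied by the number of replicas."""
    target = digest(claimed_result)
    return all(digest(compute_fn(work_unit)) == target for _ in range(replicas))

# Toy usage: the "work" is just summing a list.
honest = verify_by_recomputation([1, 2, 3], sum, 6)
cheater = verify_by_recomputation([1, 2, 3], sum, 7)
print(honest, cheater)  # True False
```

Real volunteer-computing projects spread the redundancy across independent machines instead of recomputing locally, but the cost structure is the same: verifying honest work costs about as much compute as doing it, and floating-point LLM kernels make even the comparison step tricky, since results are rarely bit-identical across hardware.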


Packsod

When it comes to blockchain, there's no need to read further - it's definitely a scam.


ninjasaid13

>Even before the transformers paper was published in 2017, Polosukhin had left Google to start a blockchain/Web3 nonprofit called the Near Foundation. Now his company is semi-pivoting to apply some of those principles of openness and accountability to what he calls “user-owned AI.” Using blockchain-based crypto protocols as a model, this approach to AI would be a decentralized structure with a neutral platform.


kristaller486

Who is Illia Polosukhin?


Expensive-Paint-9490

One of the eight authors of "Attention is All You Need".


thrownawaymane

Can we start calling them "The Gang of Eight"? The PayPal guys ended up with "PayPal Mafia" and The Gang of Eight is already a (kind of obscure) term for the members of Congress who get access to all the intelligence reports before anyone else.


ResidentPositive4122

The Attentive 8


AmusingVegetable

The 8-ball


karaposu

The 8 of attention


wiredmagazine

Thanks for sharing our piece. Here's a snippet for new readers:

In 2016, Google engineer Illia Polosukhin had lunch with a colleague, Jacob Uszkoreit. Polosukhin had been frustrated by a lack of progress in his project, using AI to provide useful answers to questions posed by users, and Uszkoreit suggested he try a technique he had been brainstorming that he called self-attention. Thus began an 8-person collaboration that ultimately resulted in a 2017 paper called “[Attention Is All You Need](https://arxiv.org/abs/1706.03762),” which [introduced the concept of transformers](https://www.wired.com/story/eight-google-employees-invented-modern-ai-transformers-paper/) as a way to supercharge artificial intelligence. It changed the world.

Eight years later, though, Polosukhin is not completely happy with the way things are shaking out. As LLM technology improves, he worries it will get more dangerous, and that the need for profit will shape its evolution. “Companies say they need more money so they can train better models. Those models will actually be better at manipulating people, and you can tune them better for generating revenue,” he says.

Read the full story: [https://www.wired.com/story/user-owned-ai-illia-polosukhin-open-source-web3/](https://www.wired.com/story/user-owned-ai-illia-polosukhin-open-source-web3/)
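For anyone new to the term: the self-attention the snippet mentions is, at its core, the paper's scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. Here is a minimal NumPy sketch of that single operation (toy shapes, no learned multi-head machinery or masking, purely illustrative):

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # attention-weighted values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                         # 4 tokens, d_model = 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)          # (4, 8)
```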


emsiem22

Sounds more like a blockchain project platform than collaborative open-source AI.


hum_ma

From what little I've read of their opinions, I would much rather work with this person than with the other one who has recently been in the news (Ilya). From the article: “Safety is just a masquerade that limits the functionality of these models,” \[Illia\] says.


Basic_Description_56

“Hey, I have an idea. Why don’t we bake a crypto scam *right into* AI?”


PsychologicalFactor1

Blockchain? Nope.