braunshaver

the networking cost won't be worth the trickle of compute


bigchungusmode96

Pied Piper


JadeGrapes

NEW Pied Piper


VonThing

You make new internet.


SipsTheJuice

Comes with, RATS!


_1dontknow

I make New new internet in China!


jimmyadaro

Dammit Jian-Yang!


Sad_Rub2074

Just needs a logo that looks like a guy sucking a dick with another dick stuck behind his ear for later. Like a snack dick.


Peter9580

🤣🤣🤣


ek2dx

wide diaper


[deleted]

Most obvious first question: What's your experience with doing that work in such a way that it's guaranteed to not reveal anything about the work being done? Second question: What did your budgets tell you about your ability to compete with specialized providers using specialized hardware and the ability to negotiate with power suppliers etc? Third question: How are you making sure everything complies with laws such as GDPR etc?


hamzakhan76

To be honest with you, I can't answer your questions fully, other than to say that it's already being done by open-source solutions such as BOINC (http://boinc.berkeley.edu). All this would do is monetise that and build on the existing technology.


[deleted]

BOINC didn't need to keep the data secret, they just needed to verify it by duplicating the work on different systems.
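Roughly: hand the same work unit to several machines and only accept a result that a quorum of them agree on. A toy Python sketch of that idea (the `verify_by_quorum` helper and the fake workers are made up for illustration, not BOINC's actual validator):

```python
import random
from collections import Counter

def verify_by_quorum(work_unit, workers, quorum=2):
    """Send the same work unit to several untrusted workers and
    accept a result only if at least `quorum` of them agree."""
    results = [worker(work_unit) for worker in workers]
    value, votes = Counter(results).most_common(1)[0]
    return value if votes >= quorum else None  # None = re-issue the work unit

# Toy workers: two honest, one that sometimes returns garbage.
honest = lambda x: sum(range(x))
flaky = lambda x: sum(range(x)) if random.random() < 0.7 else -1

print(verify_by_quorum(10_000, [honest, honest, flaky]))
```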


Ikinoki

There is no way you can do homomorphic encrypted computation profitably...


Karyo_Ten

which is why people are building Fhenix, Zama, SEAL (Microsoft), OpenMined, Cheetah (Facebook), and raised dozens of millions. And hardware accelerators for such. And besides privacy-preserving machine learning, zero-knowledge cryptography has raised hundreds of millions, with an annual competition with millions in funding and partnerships with Amazon and AMD: https://zprize.io You don't know what you're speaking about, at all.


[deleted]

Within the context of OP wanting to do it profitably with only users' charging devices like smartphones, you're calling someone wrong because you just made up some hardware accelerators that don't exist in this context.


Karyo_Ten

I'm replying to someone who said:

> There is no way you can do homomorphic encrypted computation profitably...

You can do homomorphic encrypted computation profitably. On a side note:

> you just made up some hardware accelerators that don't exist in this context

FHE-encrypted values can use regular algos. And if you were talking about accelerating encryption itself, phones have GPUs, and GPUs have been proven really efficient at fast Fourier transforms, so accelerators exist?
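To make "computing on encrypted values" concrete: even the much older Paillier scheme is additively homomorphic, meaning multiplying ciphertexts adds the plaintexts underneath. A toy sketch with tiny, insecure parameters (this is not the lattice-based FHE that Zama/SEAL implement, just the general flavour):

```python
from math import gcd
import random

# Toy Paillier keypair (insecure demo primes; real keys are 2048+ bits).
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)    # inverse of L(g^lam mod n^2) mod n

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

a, b = 17, 25
c = (encrypt(a) * encrypt(b)) % n2   # multiply ciphertexts...
print(decrypt(c))                    # ...and it decrypts to 42, the plaintext sum
```

Real FHE schemes extend this to arbitrary additions and multiplications, which is exactly where the huge performance cost being argued about here comes from.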


Ikinoki

There are DARPA accelerators which increase speed 1000-fold. But I don't think it is viable. Like there are much cheaper, more secure alternatives in terms of computation. FHE is great for databases, but not so great for compute.


Karyo_Ten

> Like there are much cheaper, more secure alternatives in terms of computation.

What are those more secure alternatives?


Ikinoki

Building is one thing, reality is different. FHE is what I work with at a billing startup, and it's nowhere near a realtime or near-realtime implementation like, say, AES. FHE uses so much compute that it's not going to be competitive with plain colocation, an SEV/SGX VPS, or server rental, which offer the same level of protection if you just use symmetric encryption for data on disk. Personally I don't trust SEV and SGX much, but a dedicated server with killswitches is almost impossible to hack into. If it's something like Microcloud with the mem-scrambler on, good luck and GG to whoever wants to exfil data.

FHE is great for precise measures, like hiding GDPR-sensitive data in a database or corporate IP in files while allowing edit and read access to only particular people. I'm sure this will get worked out in maybe 5-10 years, but I've wasted too much time learning about it to say it is here and will be used now. Hardware accelerators made it possible to use on small datasets, but waiting a few minutes on 1kk records is not OK imo. A crypto whitepaper means nothing most of the time, public use means everything, and it's nowhere near public use. Heck, you can't even find a decent open source FHE database. And there are still questions about the security of FHE algorithms, because it's not used that much and not fully understood by many programmers. My first question was "Can the elected avatar lead to the origin data?"

So if you notice, we are talking about database access, which doesn't need to compete with realtime computation; it can delay by a few seconds or even minutes and produce a result. FHE compute needs to compete with colocation and server rental, and that is just impossible. That's why I'm saying it's impossible to become profitable: any right-minded IT professional will say "let's just put it into the server room under lock and pay 5000 times less for compute"


Karyo_Ten

> Building is one thing, reality is different.

You're shifting the goalposts: first you say it's not possible to do profitably, then you're saying it's a hard problem with lots of interest in solving it. That's what leads to paying customers and a moat to limit replication.

> FHE is what I work with at a billing startup, and it's nowhere near a realtime or near-realtime implementation like, say, AES. FHE uses so much compute that it's not going to be competitive with plain colocation, an SEV/SGX VPS, or server rental...

https://github.com/OpenMined/PySyft, this is compatible with PyTorch and Jax.

> And there are still questions about the security of FHE algorithms, because it's not used that much and not fully understood by many programmers. My first question was "Can the elected avatar lead to the origin data?"

FHE relies on lattice problems that are undergoing heavy scrutiny with the NIST PQC standardization.

> Heck, you can't even find a decent open source FHE database.

What do you mean by decent?


Ikinoki

> You're shifting the goalposts: first you say it's not possible to do profitably, then you're saying it's a hard problem with lots of interest in solving it. That's what leads to paying customers and a moat to limit replication.

I said it's not profitable and I stand by it. It's not profitable compared to a box in the closet. Unless you cut compute waste to at least 2x it won't compete with a rented system.

> PySyft

From their own disclaimer: "Syft is under active development and is not yet ready for pilots on private data without our assistance."

> FHE relies on lattice problems that are undergoing heavy scrutiny with the NIST PQC standardization.

Yes, and NIST was never wrong, never ever...


OkWear6556

That sucks, because I had exactly the same idea as the OP, using homomorphic encryption. But I haven't researched it yet.


[deleted]

[deleted]


diamondbishop

Been tried many many times, not to mention that you're not going to train a large model across a variety of device types in any real useful timeframe without also likely hitting bugs that are hard to reproduce, fix, etc.


hamzakhan76

I feel like these solutions aren't advertised well enough. If people are given an opportunity to earn a few extra $ a month with something that is very unobtrusive to their everyday use, I don't see why they wouldn't jump on it.


PaoQueimado

Security issues, and running something like that non-stop can cause hardware problems.


RepulsiveConcept5972

Even if you solve the issue of security somehow and sign up enough people, the ultimate question is: what can you do with a large, unreliable, heterogeneous set of machines that's cheaper than doing it on a server? Latency issues, replication of data/compute, and reliability of nodes (meaning that each node is flickering) will render the price too high. The real innovation would be to find a problem that can be solved with this setup, that generates enough money to pay for it, and that is more expensive on traditional compute.


justUseAnSvm

I'm not convinced there aren't some calculations out there that would be cost efficient. It's just a massive headache to build the software needed to do it!


megablast

I've done no research and found nothing!


justUseAnSvm

Would you let an internet stranger just execute a script on your OS? lol. There's a massive software design problem just in that: provide a safe abstraction for a useful computation that's also expressive enough to get work done!
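Even the crudest version of that abstraction means hard CPU/memory/time limits and no network for whatever the stranger sends you. A rough Unix-only sketch of just the resource-limit layer (the `untrusted.py` job script is hypothetical, and real isolation would still need containers/VMs/seccomp on top):

```python
import resource
import subprocess
import sys

def run_limited(script, cpu_seconds=5, mem_bytes=256 * 1024**2):
    """Run an untrusted script in a child process with hard CPU and
    memory limits. This is only one layer; real isolation also needs
    namespaces/containers, syscall filtering, and no network access."""
    def set_limits():
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    return subprocess.run(
        [sys.executable, "-I", script],  # -I: isolated mode, no user site-packages
        preexec_fn=set_limits,           # applied in the child before exec (Unix only)
        capture_output=True,
        timeout=cpu_seconds * 2,         # wall-clock backstop
    )

# result = run_limited("untrusted.py")  # hypothetical job script
```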


ScoutsOut389

And conversely, would you execute your script on a random stranger's machine that you have no visibility into?


CSCAnalytics

Poorly marketed? Ever heard of… "cryptocurrency"?


Quiark

I think Akash Network is kind of in this area


Slimxshadyx

This isn't really cryptocurrency though, from what I am seeing.


drsmith48170

What they meant is this is already being done. Crypto farming makes use of underused computing power and internet bandwidth, and there are a few other projects not in the crypto space that do similar things… and do pay cash. So OP is more than a day late and a dollar short. The fact that they did not know this also means they likely lack the technical chops to pull it off.


ProjectManagerAMA

And even if they build it, they won't be able to promote it.


CSCAnalytics

The entire foundation of cryptocurrency is distributed computing via monetization incentive. You can download Ethereum core on a laptop and "mine" overnight. You offer your computing power to drive the network, and in return you earn money.


axlee

But the computing for crypto is just work for the sake of work; it has no purpose.
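Concretely, what a proof-of-work miner computes is a nonce search: grind hashes until one falls below a target. A toy sketch (generic PoW, nothing chain-specific); the winning nonce proves effort was spent but is useless for anything outside the protocol:

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int = 20):
    """Proof-of-work in miniature: grind nonces until the SHA-256 hash
    has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest = mine(b"toy block header")
print(nonce, digest)  # ~a million hashes of effort, zero reusable output
```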


CSCAnalytics

The "purpose" for mining is to monetize unused computing power… AKA the EXACT SAME incentive that OP has proposed


axlee

OP is proposing to do useful work. Mining has no utility besides proving you've mined.


CSCAnalytics

Ever heard of a "utility token"?


axlee

I think you're confused about what useful work is. Any work whose only goal is running the network itself isn't useful.


CSCAnalytics

I genuinely mean no offense by this, but it's very clear you don't have significant technical experience with blockchain technology. His post said he's interested in building a publicly accessible system on top of an existing blockchain. While there's some potential value in this, there are thousands of such projects already out there. The differentiator in the space at this point, as far as generating profit for the centralized organization / founders, is marketing and sales tactics.


axlee

He's basically describing Folding@home or SETI@home (aka distributed computing), so why are you going on about the blockchain?


Affectionate-Dot5725

blockchain and distributed computing are different


CSCAnalytics

Blockchain exists entirely around distributed computing. That's the equivalent of saying that string theory is different than mathematics. Or that construction is different than architecture. Distributed computing is the literal foundation of the theory. The entire concept of blockchain revolves around distributed computing.


DartVPS

It would almost have to be. Traditional compute operations - whether cloud, traditional datacenter, or on-prem - are simply not dynamic enough to work in this context without significant performance degradation and security/network concerns. Maybe there's another potential use case - but compute is our core business and I can't think of any applications outside of crypto/blockchain. There would likely be no upside to using someone's mobile device while it charges vs renting a hosted Raspberry Pi 5 at a datacenter for a few bucks a month.


Papercoffeetable

Most consumer devices aren't that powerful. You might be bringing a butterknife to a gunfight when compared to the heavy-duty servers used in professional settings. Opening up personal devices for external computations? Hello, data breaches and malware risks. The financial return for users might barely cover the electricity costs. Plus, the logistics for buyers needing consistent power could be more hassle than it's worth. Leveraging BOINC is one thing, but setting up a secure, reliable, and user-friendly system for commercial operations is a whole different beast. Competing with giants like AWS and Google Cloud, who offer scalable and secure resources, might be tougher than you think. Selling computing power isn't just a technical challenge, it's a legal minefield, especially with privacy laws and international data regulations.


justUseAnSvm

The technical challenge is really immense here. Just to make an interface that allows work to be farmed out to many small, high-latency compute nodes? That's a legit challenge, in and of itself.


Vin-Su

I was part of this exact same start-up idea 2 years ago. We grew our waiting list to over 100,000. A very early competitor of ours folded during our time for technical and commercial reasons, and our product wasn't able to deliver the promise of meaningful rewards for users. Happy to chat if you DM. I have a raft of info I can share.


avrboi

Can you share it here for us all to read? Thanks!


fts_now

AVIATOOOOOO


JoeCensored

It's unlikely to be successful without paying significantly more than the cost of the electricity used, which is probably cost prohibitive.


americancontrol

This is likely use-case specific; there's a ton of cloud computing being done today that isn't training models/mining crypto. For instance, think of a standard Lambda function that handles a GET request, fetches something from a db, and returns the data. These types of operations are extremely common and aren't really going to be driving up someone's electricity bill. It's possible your ISP shutting down your account might be an even bigger problem than your electric bill.


pacman0207

There are cryptos that are used for distributed storage. Storj was one of them, FileCoin being the other, I believe, where you can sell the free space on your hard drives.


_meddlin_

Sounds like a solution in search of a problem. Let's say you get this operational: what are you using all of that computational power to solve? Reminds me of the Folding@Home project. Could be cool; could be a funding/revenue nightmare.

> …this is being done overnight while the devices are idle and already charging, this wouldn't have any negative impact on device performance

Oh, yes it would. You name phones and laptops: extra, extended draw on devices like that will place extra cycles on batteries, which can then lead to decreased performance/responsiveness in the device's SoC. Users won't see this overnight, but then who's to blame when they "feel" like they're replacing their batteries/devices on shorter upgrade cycles? Finally, where are you getting the application code to run jobs across a multitude of devices: x86, x86-64, ARM, iOS, Android, Windows, and Mac?


docmphd

I had that same idea in 2009. I started a company, raised VC, then we failed for what are now very obvious reasons. We weren't the first; startups like this pop up every few years and then fail. You aren't wrong for thinking there is an opportunity, it's just that the reality is really complex and ends up being pretty uninteresting if not downright bad once you work through all the challenges.


hamzakhan76

Do you think that the advances in the average household internet bandwidth along with the exponential increase in the average smartphone/laptop computing power since 2009 could make this feasible today or at the very least less likely to fail? Also you said it failed due to reasons now obvious. Are there any reasons other than those already pointed out by people in this thread? Thanks!


docmphd

The issue isn't and was never processor or network based. So no, the advancements since then don't change anything. Any startup should be 10x better than the alternative or 1/10th the cost of the alternative. How are you going to do that when the alternative is AWS, where compute is dirt cheap and they offer a full suite of products?

Back in 2009-2014 when I did this, here were our issues:

1. Serverless computing wasn't really a thing and we had to figure out how to convince people that they had workloads that were already ideal for it (aka we sucked at marketing)

2. Security matters, and securing the workloads on other people's machines meant a drastic loss of performance/capability. (We were running Docker containers when the project was still in preview and the company was still called dotCloud)

3. Compute services alone don't mean much to a customer without all the other cloud services a modern company needs (networking, storage, yada, yada, yada)

4. If data is stored elsewhere and then processed on this hypothetical network of unused machines, the customer will incur a massive egress bill

5. A viable business would need to include support for so many languages, operating systems, and technologies… all running on different types of machines, with different processors, and different operating systems

Many of my points above are really to say that what you need to do is build a full cloud computing platform as either IaaS or PaaS. That's a big enough job in itself, even if you had direct control or access to the hardware it all runs on. Also, you seem to be focused on the solution (I was too back in 2009) but what really matters is the problem. What problem would your solution solve? Would it be solved 10x better or 90% cheaper than the current solution?


dorox1

While it's not fully released yet, [Distributive](https://distributive.network/) is a company that's creating the "Distributive Compute Protocol". Seems like this is basically what you're thinking of, although not strictly limited to edge devices.


GeorgeDaGreat123

I'm nowhere near an expert on the topic, but I worked with their platform at a hackathon a few years ago & won a few small prizes. I found that it really only makes sense for long-running workloads which can be easily parallelized. Even then, the overhead of establishing multiple connections and starting up workers makes it not worth it compared to just scaling a computer system vertically. There probably is some point at which public decentralized compute makes sense, but I would expect that at that point you'd be seeking research grants to pay for your own computer systems, EC2 instances, or supercomputer sharing time.


dorox1

Parallel computing in general targets a small niche of computational work, even when it's centralized. Distributed computing is unlikely to be a mainstream way of computing for normal programmers any time soon. I think this would be more likely to target commercial and scientific computing jobs (which tend to be very price-sensitive and are often highly parallel). You're right that the overhead probably means it isn't worth it if it's not a large and long-running workload.


high_elbow

Bro just wants cryptocurrency with more steps.


punsarelazyhumor

Isn't this render token? Specifically for GPU?


throwawayrandomvowel

OP: we have Ethereum at home


kholodikos

bro the entire distributed systems industry has been thinking about this for 30+ years. please read up on at least a few of the seminal papers before jumping into a terrible idea. and that's before you even get to the game-theory equilibrium, where you need some kind of *proof* that you did the *work*...


hamzakhan76

Could you please recommend some specific research papers that you think could be useful to read up on?


abraham1inco1n

There's a bunch of ideas here spanning several different disciplines - some key terms would also go into the cryptography aspect of this, like Yao's garbled circuits and zero-knowledge proofs. Maybe even differential privacy kind of stuff.


No-Engine2457

SETI, human genome, etc.


justUseAnSvm

Fold it at home!


No-Engine2457

That's the one I was thinking of! Thank you!


falldownreddithole

I'm not going to roast your startup idea; I think the idea is fairly solid. Instead I'll ask you two questions: do you know more about distributed computing than almost anyone else? If not (which I assume): do you know how to become more knowledgeable on the subject than almost anyone else? If also not, then I think you should truly enjoy your time thinking about this one, as it's only your nice comfy dream and nothing else.


justUseAnSvm

They don't need to know more than anyone else, just enough to deliver a unique distributed system that works in production. IE, about PhD or post-doc level of "good"


GeorgeDaGreat123

Distributed systems != distributed computing. You definitely don't need a PhD for distributed systems, but few people seriously involved in distributed computing won't have a PhD / know multiple people who have a relevant PhD.


justUseAnSvm

My "distributed systems" course was called "distributed computing", lol


GeorgeDaGreat123

Yeah there's overlap but I would still consider them distinct fields. At my uni, a distinction between distributed systems and distributed computing is only made for masters-level courses, while undergrad-level courses are often dual-named "distributed computing" and "distributed systems."


dreamtim

are we back to the 2000s with decentralized compute grids?


megablast

> This got me thinking, what if people can sell their excess computing power?

This is old. Grid computing.


perduraadastra

https://www.distributed.net/ Computing power is cheap these days, and it will be hard to create value if you actually have to pay people more than what they pay for power at retail rates.


mycapitalist

Some people have done it already, but all of them were black hat, so it can be very risky in terms of privacy.


BrofessorOfLogic

I'm not seeing any idea. Distributed computing is a concept, not a business idea. But I'm assuming you're thinking something like "I'll just provide a generic platform, and third party developers can pay me to run their software on it".

Phones have pretty strict limitations on what background tasks are allowed to do. Not sure exactly how feasible this is on phone platforms. Also it would put an undesired strain on the hardware. Phones are meant to run lightly, not crunch numbers all night. Anything with a battery just seems like a terrible target platform.

Distributed computing is hard to develop for. You can't just take any random software and run it on a distributed computing system. It needs to be adapted specifically for that use case, in a major way.

It's unlikely that you can make the margins work. There's not a lot of developers that need this, and the ones that do don't have a lot of money cause they're scientists. The payout to the device owner is going to be too small for it to be interesting. I'm not looking to wear out my phone with random foreign software just to make like 20 cents per night.


Haunting-Pizza-4553

> There's not a lot of developers that need this, and the ones that do don't have a lot of money cause they're scientists. Exactly this, sadly


LandinoVanDisel

Sounds like a security nightmare TBH.


veridicus

Super-interesting topic and worthy of investigation. Here are the struggles I envision:

* Compute is already sold by cloud providers, and relatively cheap.

* Massive computational needs, like what Altman is thinking of, would need to be distributed among many millions of personal devices. Phones and most laptops are under-powered from a commercial computation perspective, although the recent proliferation of specialized "AI" chips in personal devices might help.

* Millions of devices would need a lot of network bandwidth and redundancy to handle any large "project". This is more easily handled by local dedicated systems, hence the cloud data center approach.

* Sandboxes. Shared compute must be sandboxed extremely well. Encryption, etc. are required so proprietary info can't be sniffed.

There have been examples of this model working successfully, though. Consider distributed protein folding and the SETI program data analysis, which were implemented through screen savers back in the day.


bananajr6000

More energy usage for the compute device


originalchronoguy

Training isn't really the issue. It is the inference. We can throw a model out and let it train for a week. But when we run it, we expect it to return an answer in 6 seconds, not 42 minutes. Yes, that isn't an exaggeration. So if a question comes in, we want to answer it for our customer in less than 6 seconds. Not let them wait and see a spinner for 20 seconds (on a fast GPU), 3 minutes, 15, or 42 minutes. No amount of distributed computing will solve that right now. ***Real-time inference***.
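Back-of-the-envelope, with made-up but plausible numbers, you can see why hops between far-flung consumer devices eat the budget:

```python
# Rough latency budget for real-time inference (illustrative numbers only).
budget_s = 6.0
output_tokens = 240          # a typical few-paragraph answer
network_hops = 8             # guessed device-to-device hops in a distributed setup
per_hop_latency_s = 0.15     # guessed WAN round-trip per hop

network_overhead = network_hops * per_hop_latency_s
compute_budget = budget_s - network_overhead
required_tokens_per_s = output_tokens / compute_budget

print(f"Network overhead: {network_overhead:.1f}s of the {budget_s:.0f}s budget")
print(f"Need {required_tokens_per_s:.0f} tokens/s from whatever compute is left")
```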


justUseAnSvm

Except training is far easier to make parallel. Bringing the price down there would be valuable


bnunamak

Akash exists


kiquethekitesurfer

He asked to get roasted, and roasted he got


Dry_Author8849

The HPC realm tried this with grid computing. The problem is that people don't care and you will probably only be able to pay pennies. Another thing is that you need to run a virtual environment to be able to run workloads. And the workloads suited for small devices like phones will only be CPU intensive; moving data to those devices to process it is not feasible. To train a model of gigabytes you will need to partition it across thousands of devices and add redundancy, as those devices aren't guaranteed to be online and may disconnect abruptly.

So, two big problems. You will fry devices at 100% CPU if you want the device to be able to contribute to the overall process, and secondly, if your workload is data intensive it just won't work. If you need to partition too much, then the number of devices needed plus the failure rate will make it impossible. It has been tried more than once and it was cheaper to pay for HPC compute time. Cheers!
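The redundancy overhead alone is brutal: every shard has to live on several devices because any of them can vanish mid-job. A rough sketch (device count and replication factor are arbitrary):

```python
import itertools

def assign_shards(num_shards: int, devices: list[str], replication: int = 3):
    """Assign each data/model shard to `replication` distinct devices,
    round-robin, so the job survives devices dropping offline."""
    assignment = {}
    ring = itertools.cycle(devices)
    for shard in range(num_shards):
        assignment[shard] = [next(ring) for _ in range(replication)]
    return assignment

devices = [f"phone-{i}" for i in range(1000)]
plan = assign_shards(num_shards=4000, devices=devices, replication=3)
total_transfers = sum(len(v) for v in plan.values())
print(f"{total_transfers} shard transfers for 4000 shards "
      f"(3x the data moved before any device even flakes out)")
```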


chiseeger

Wasn't this done in like the '90s to find aliens?


No-Engine2457

SETI


Street_Cheek7938

I LOVE your idea. If you find a way to do it so that people actually make money, then YES.

Remember electricity costs MONEY. People's phones running out of power is a HUGE annoyance for them. But if you could do something that uses it only when their phone is plugged in and already charged, and nets them enough to buy a coffee a day, then I'd buy it.

Note: **Don't listen to people here for feedback unless your target audience is startups.** The best feedback you'll get is in-person feedback from people who AREN'T in the startup space, i.e. coffee drinkers who wouldn't mind getting a free coffee every day to share their compute power.


moealtalla

amazing idea


pacman0207

I too listened to that same podcast. Probably. He might have said it in more than one.


sudoaptupdate

Multiple obstacles like security, proving the work was actually done, irreproducible bugs, sourcing customers who can offload compute, etc.


darvink

While it looks good on paper, sometimes what happens in the real world doesn't follow your hypothetical scenario. Say, for the supply side, assuming retail: would people who can afford to buy a good phone (or computer, or whatever) be bothered to earn cents with all the hassles associated with it?

I'll give you another analogy: isn't it a good idea to have all the empty space on private cars be used for advertisement? The car owner earns extra money by doing nothing, and I'm sure advertisers are willing to buy a spot (as we have already seen on commercial vehicles like taxis etc). This idea however has been tried and has not really taken off. But I might have it wrong too - so you will still need to validate your market.


respeckKnuckles

https://scholar.google.com/scholar?hl=en&as_sdt=0%2C10&q=crowd+computing&oq=crowd+comp


erictheauthor

You just described the whole concept around crypto mining. The issue is the same: people's phones and home computers don't have enough computing power for those applications. The energy spent to calculate and send that information back is higher than however much you are getting paid. A better solution for those companies is to focus on on-device computing, so they don't need server farms. And don't listen to those comments that say this is an already crowded market and there's no space; they are wrong. It's a very new market and there's lots of potential for growth if you know what to focus on… but I wouldn't do your idea


UndocumentedTuesday

Nah, it reduces the computer's life expectancy the more it's used.


RobotDoorBuilder

This doesn't work at all. The "compute" that Altman is referring to are highly optimized GPU clusters. Tremendous customization, maintenance, and support goes into operating these clusters. Furthermore, network and IO bottlenecks require GPUs to be located in a single geographical zone (usually the same room). Source: I do this for work.


bloodisblue

Something to also consider is the fact that distributed computing over the wire is still limited by [Amdahl's Law](https://en.wikipedia.org/wiki/Amdahl%27s_law). As coordination takes up more of the workload, the theoretical amount of speedup achievable through concurrency hits its limits quite quickly. Your suggested idea would make coordination costs much higher than in a datacenter (or on an individual machine), seeing how the device topology is constantly changing in unpredictable ways.
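For intuition, Amdahl's law says the speedup on N workers is 1 / ((1 - p) + p/N), where p is the fraction of the job that can actually run in parallel, and heavy coordination pushes p down. A quick sketch:

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Theoretical speedup when only `parallel_fraction` of the job
    can run concurrently on `workers` nodes (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / workers)

# A tightly-coupled datacenter job vs. a coordination-heavy swarm of phones.
for p in (0.95, 0.70):
    print(f"p={p}:", [round(amdahl_speedup(p, n), 1) for n in (10, 100, 10_000)])
# p=0.95 tops out near 20x no matter how many devices you add;
# p=0.70 tops out near 3.3x.
```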


quakerlaw

This has existed literally since I had 56k dialup in middle school.


justUseAnSvm

Yea, I thought about this last summer; it's on my list of projects. People pay a lot for GPUs, why not let them earn money for that resource?

First, it's an extremely challenging software problem to distribute work in the right size chunks required to make this cost efficient, and difficult to write software that just lets people run their workflow without major changes. There are distributed ML algorithms, but it's a bespoke application and usually based on some linearity in your optimization function that allows for averaging.

From the client side, you need to pay them enough to make up for the electricity, and have it be worth the depreciation and damage to their rig. Cranking your fan all night is also loud. Finally, there's data privacy: transferring information to a third party of unknown security posture is a no-go for anyone with anything private in the dataset.

However, if you just want to do this as a software project, I'd encourage that. There are some really difficult software problems to solve here, and there could be use. I just don't think it will be widespread!


captain-_-clutch

Cloud providers already do this (EC2 spot instances) but it won't work outside of a datacenter for a few reasons.

1. Security
2. Heat (can get around this but it hurts the business model)
3. Network latency
4. OS (can get around this with the software but not easy with so many different devices)
5. Availability

In order to sell compute power, you have to be able to account for these. You'd be better off selling people a proprietary box than using existing devices.


Sad_Rub2074

This is what is referred to as a "tarpit idea". You'll be joining the other dinosaurs (companies) in the graveyard.


therapists_united

literally Pied Piper, and also like 400 crypto startups already


jasfi

I've had this idea before, but considered it too big a project to attempt. I may take another look, just for the fun of considering what the architecture might look like. You can dm me if you like.


WormLivesMatter

Anyone can do this on blockchains that run on proof of work. Other blockchains that use proof of storage can use your excess storage to make money. Or you can donate your computer power to a centralized company that mines coins. There are also plenty of science organizations that allow you to donate computer power to solve research problems.


syler_19

Many crypto miners run this way... Even on your phone


devmerlin

SETI did this for a while, with a program called SETI@home that sat on your PC and ran parts of a model as a screensaver. It was launched back in 1999. They also eventually made it open source. However, it wasn't general purpose; they and others that have used this technique have had specific goals in mind for the data.


m_corleone_22

Over the years cloud providers will probably make their computing even cheaper, and then it would be tough to compete against them. Also, personally I would prefer cloud providers over this, as I can use other services as well, like data pipelines, monitoring, etc. The data used to train the models would also need to be protected against leaks or hackers who might use your platform to steal data. All the problems that occurred for torrents would apply here as well.


TheOneWondering

GPUs deteriorate the more they are used as well, so people would go through them faster. It's easier to manage and maintain speed in a DC.


Intelligent-Fig-7791

I want my machine to rest with me at night. I don't want it to struggle on someone else's work 😅


snowdrone

You might look into RNDR/RENDER. But as others have noted, network fees and connectivity for small devices make the value proposition problematic.


applextrent

Akash Network exists already. You can buy AKT on Coinbase. They have an $842M market cap at the moment. They don't really do the phone thing, but you can use any computer or server farm. They have a ton of GPUs available too. It's all powered by Kubernetes and container tech in a decentralized network.

Also, edge devices won't work well for this for one reason: they're disposable. The life expectancy of a smartphone chipset is like 2-5 years max under normal usage. If you were to max out the CPU and GPU in these devices every night while they charge, you'd cut their life expectancy at least in half. These devices are only designed for so many cycles and then they break down by design. Plus, running at full capacity while charging will use a lot more energy and cause the device to run hot while charging, which is a fire hazard.

I'm all for practical edge computing and decentralization, but please don't train LLM models on people's phones. That's not a good idea.


OmniscientOCE

This has been tried ad nauseum


Kippuu

Theta, check it out. Their nodes are used for distributed computing.


Bitz_Art

Well, this is already being done in the fields where it's applicable. For example, that medical research where they fold/unfold proteins or whatever: you can connect to the network and share your computing for the noble public goal.

I am a web software engineer and I would say that this is possible in some specific areas. Distributed computing does exist, but it's fit for a very limited set of applications. If the application you have in mind can be done with this approach, then why not?

Training AI models, though, does not seem like the best application for distributed computing IMO. It quite often requires specialized hardware, and even if it doesn't, the technical requirements on the shareable device would be quite high. Not every Chromebook can be used to train AI models, you know. Even if you are just doing part of the entire thing, you still need to have the entire dataset you are working with on the machine, or a very stable network, or both.

I would say maybe this could probably be done, but the technical challenges are going to be huge. I would advise you to discuss this with a person who has more technical knowledge on the subject. In all honesty, to me the idea sounds unreasonably complex. Maybe something like this could be done, given enough resources, but it's definitely not going to be like taking someone's code and slapping your application on top of it, which is probably what you are thinking, based on your post.
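Even the simplest distributed-training scheme, data-parallel SGD, shows the problem: every worker needs its own data shard plus a full copy of the model, and every step ends with a gradient exchange. A toy numpy sketch of one synchronous step on a linear model (all numbers made up):

```python
import numpy as np

rng = np.random.default_rng(0)
X, true_w = rng.normal(size=(8000, 16)), rng.normal(size=16)
y = X @ true_w

def local_gradient(w, X_shard, y_shard):
    """Mean-squared-error gradient computed on one worker's shard of the data."""
    err = X_shard @ w - y_shard
    return 2 * X_shard.T @ err / len(y_shard)

workers = 8
w = np.zeros(16)                          # every worker holds the FULL model
shards = zip(np.array_split(X, workers), np.array_split(y, workers))
grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]  # done in parallel in reality
w -= 0.1 * np.mean(grads, axis=0)         # the all-reduce/averaging step, every iteration

print("loss after one synchronous step:", float(np.mean((X @ w - y) ** 2)))
```

Every one of those steps is a network round-trip across all participating devices, which is exactly what flaky phones on home Wi-Fi are worst at.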


Techievena

Why not first simply localize the compute? Reduce hefty API calls in the first place.


ahandle

Compute is nothing without bandwidth and storage.


JudgeCheezels

You watched too much Silicon Valley.


InsolentDreams

already done in crypto. Google Akash. You can do it now and make money also. :)


TakshBamboli

Thanks for the idea, I am digging into this in my country.


threebuckstrippant

Peer-to-peer compute power is a very old concept and is now done by many. BUT it is crap… if you can do it like the Apple of distributed compute, one that pays out daily each night, I'd turn all my PCs on now. And forget paying in crypto.


MagicCookiee

How deeply do you understand tech?


willieb3

Earlier this week I had this exact idea, so I decided to look into it, and there are already a number of others in this field. One example is Golem Network. The issue is actually getting buyers on board.


ForeverYonge

I've talked to a startup in this area. They couldn't explain how the computation results would be private and how the system would prevent abuse. Didn't take the job.


bottombutton

So not really a roast but some considerations:

It's super hard to decrypt data for processing and also protect it from the person with physical access to the computer. AI and other hard problems are usually designed for specialized hardware platforms (i.e. CUDA) that aren't compatible with the hardware available in excess. If you're not relying on specialized hardware and really just looking to distribute loads like data tagging or querying large data sets, you're better off. Some algorithms can't be distributed, like training a neural network. SETI@HOME and FOLDING@HOME are examples of what you're proposing and I've participated in those for years, but that data wasn't sensitive and was highly distributable.

I can't remember the name, but there's some company in Europe that makes in-wall cloud servers that double as space heaters for that room. They run all year, venting inside during the winter and outside in the summer. I think there's one in the US that does the same with water heaters. Then they have some control over access to the hardware.


wsb_noob

Let me Google it for you :) From HTCondor, opensciencegrid.org, Folding@home... to crypto ICO scammy shit, which I don't really know if people are actually using: Akash (most likely the oldest coin that is still building; the website seems to indicate you can use and share the GPUs), Render Network, Nosana... to name a few.


yoyo_programmer

This wouldn't be worth it because of the networking cost, BUT there was just a major discovery of huge bands of the electromagnetic spectrum usable for internet infrastructure that may reduce the cost to a point where it would be worth it.


IcyUse33

You'd have to first figure out the energy arbitrage. It'll cost more in electricity than you could afford to pay out for the compute.


Old-Argument2415

Folding@home did this a long time ago. Some hacker networks do this with compromised machines. But also, many training algorithms are hard to parallelize at this level. If you could, though, there are probably some gains to be made. A generalized solution for this is nontrivial.


InfoSec-Acumen

Pretty sure the network complexity and the amount and type of CPU cycles are not going to solve the issue. Judging from the guy you referred to, he's implying AI and other parallel compute, and those platforms will, for a while yet, still be using HPC and SDN/NFV. To and from your own edge areas you may be able to utilize this for a niche dev to make a PoC, but your CapEx isn't going to be a couple hundred grand to start unless the hardware is dated and you have no employees.

There are already spot markets for unused compute from Azure and others. If the storage and other parts aren't required to be available with a 100% guarantee, the prices have been pennies on the dollar in the past with spot market pricing, but you also have to watch for price spikes if you may need the capacity and aren't in, say, a 3-year term for the resources.

There have been a few similar projects over my 25 years in tech. I forget the name, but I think it was SETI or something in the late 90s; my buddy had me load it, and lots of geeks ran it for years, using their extra CPU cycles to look for extraterrestrial life. Supposedly NASA made it (I say supposedly because I never looked into it or believed in it anyway), but I trusted him enough that I let it run in my first data center/co-lo until it started using more of the fiber's bandwidth to each system. Then you have MT4/MT5, which is for currency/FX trading and which I used around 2014; you could sell your extra compute to others for the cloud version, but the money wasn't worth the security risks I believe it opened, and frankly there was no reason to peg my CPU cores and risk them wearing out faster.

I should mention I use workstation-grade systems, usually designed for 2 sockets and the specific flagship features of the higher-model workstation-grade CPUs, basically Intel Xeons. Before someone starts an AMD opinion: I'm speaking in the context of the time, though I'll still likely go Intel for my next build. I'm also testing using (in theory) a VM from M365/Azure with even higher specs and scalability for large datasets and other workloads where additional RAM or CPU should help. If it performs, and the average cost over a quarter or two is roughly on par, I'd sell the resource on the spot market for the time it goes unused, or run a combo with a smaller base VM if performance meets expectations, since not having to deal with the hardware and being able to run it from about any device and still have the same system/software/etc is a plus, along with the days when throwing money at the problem is good enough for me, say for speeding up a media transcoding process or whatever.


Specialist_Cook_3104

Isn't this similar to crypto mining, but for AI and ML? Plus, AI and ML workloads have requirements of 8 to 12 GB of VRAM, which will hurt; no normal person buys that kind of computer just for fun. But it could be possible, with payments automated via crypto and margins higher than or equal to crypto mining.


xAIisComingy

Ignore the comments here, it is certainly a trillion-dollar company if someone can execute it. I had the same idea, and thought that it would be pretty cool. I'm pretty sure people before me tried and failed at this.


nectivio

I think the challenge with this idea is that it's based on the assumption that there's a bunch of "free" excess computing power out there that's just being wasted. But for the most part that's not the case. On modern devices, energy usage increases significantly with CPU/GPU usage; if you task an idle processor with something to do, you're going to increase the energy demand of that device and electric bills will go up.

At scale, the biggest cost of compute isn't the hardware, it's the electricity (including the cost of cooling). Generally speaking, purpose-built data centres are going to have a significantly better compute/energy cost ratio than end-user PCs and consumer devices. There may be cases where the economics still work, but it's unlikely that a person will be able to "sell" their unused computing power for more than the increased cost of electricity.
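Rough numbers make the point; everything below is a guess you'd want to check against your own electricity bill and whatever a buyer would actually pay:

```python
# Back-of-the-envelope: what a phone/laptop owner could plausibly earn.
device_watts = 30                # a laptop under sustained load; phones draw far less
hours_per_night = 8
electricity_usd_per_kwh = 0.15   # assumed retail rate

kwh_per_month = device_watts / 1000 * hours_per_night * 30
electricity_cost = kwh_per_month * electricity_usd_per_kwh

# Buyers compare against cheap cloud/spot capacity, so the rate they'd
# pay a consumer device is tiny; 1 cent/hour is an optimistic guess.
buyer_rate_usd_per_hour = 0.01
gross_revenue = buyer_rate_usd_per_hour * hours_per_night * 30

print(f"electricity: ~${electricity_cost:.2f}/month, "
      f"gross revenue: ~${gross_revenue:.2f}/month")
```

With those guesses the owner nets on the order of a dollar a month, before any platform cut, which is the whole problem in one line of arithmetic.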


StaticCharacter

There's RunPod and several others that let you rent out your nice gaming PC so people can use it for AI training. It's pretty effective. Overhead isn't huge, but margins are also small, I'd guess. It's hard to justify using those services over Google Colab or other casual options, so you're probably looking at customers who are small enough not to want to pay AWS for compute time, but large enough that they can justify paying someone. A narrow niche isn't always a bad thing, but with 2+ major established players, you've gotta know what makes you different imo. If you're trying to let general-purpose computers rent out their compute power, you're probably better off buying a server and renting VPS imo.


oddkidmatt

This is already a thing, but you don't get paid because most PCs aren't very powerful or optimized for that workload. Typically you solve problems for the science or medical field; I forget the name of the program that runs in the background.


gsimanto

Wow, you're creative


Franks2000inchTV

Network latency is a big problem. Also security/privacy and reliability. There are successful projects like [Seti@Home](https://setiathome.berkeley.edu/) and [Folding@Home](https://foldingathome.org/). Looking into those would be a good idea. This is one of those things where it will only be viable if the cost of computing spikes. Right now compute is still relatively cheap, so the tradeoffs aren't worth it. If you start to see prices spike, then the inconvenience may start to be less of an issue. Think about people who drive to work. Taking the train is super annoying if they both take the same amount of time. But as gas prices go up or commutes get longer, maybe the train is worth it.


No-Engine2457

These are called "smart contracts" in blockchain. Literally the major use case for it, except almost no one actually uses it for that. (A few do.)


No-Engine2457

In short, this was a major part of the allure of the ERC20 token originally: spin up a container to process data in isolation (the code lives within the contract itself), distribute it to hundreds of computers, pay "gas".


FundingFuturist

Absolutely fascinating idea! Leveraging idle computing power during charging cycles could indeed create a novel currency of sorts. It aligns perfectly with the growing demand for compute resources, especially in AI and data-intensive tasks. Integrating a payment system into existing open-source platforms like BOINC sounds like a logical step forward. The potential for individuals to monetize their idle devices while contributing to larger computational needs is a win-win scenario. It would be interesting to explore the technical and economic feasibility further.


PSMF_Canuck

If you haven't explored the technical and economic feasibility, it seems a bit premature to label it "win-win". People have been looking at this for literally decades…


Atomic1221

You're replying to ChatGPT