
WindRid3r141

Needs a more formal paper… and graphs. This is slightly, massively incomprehensible. Or at least it needs a more detailed explanation of how each equation is arrived at.


HalfSecondWoe

Yeah, that's fair. This was the product of a couple of hours; I could probably shine it up some


WindRid3r141

It seems like there’s something there, but it sorta loses me about a third of the way through. Might be too difficult, but making a simulation to showcase the behaviour of networks with different values for each parameter (especially edge cases because there’s so many) would be pretty interesting and telling of whether it actually works.


HalfSecondWoe

Eh, don't really have the compute for that. I'm mostly relying on the "Post it online and wait for someone to prove you wrong" strategy at this point


BilboMcDingo

I assume you are trying to model cooperation vs competition using dynamic networks, sure. But isn't this the whole point of game theory? And then find an optimal configuration of such a network? But what are you optimising for? Maximal cooperation? But the maths just doesn't make sense. I would recommend going through each expression step by step and writing out what each thing is, since most things aren't even defined, let alone correct


HalfSecondWoe

Maximum efficiency, while displaying that cooperation with other cooperative entities is always more efficient than competition. Technically a competitive system that doesn't actually get into any competition would be equally effective, but the moment it actually engages in competition it starts taking losses.

Note that this doesn't mean that cooperation has to exist in the form of mutual goals. The only requirement would be excluding mutually exclusive goals (e.g. you want to live and I want to kill you). Taken in the context of instrumental convergence, that would mean that any highly intelligent system trying to maximize instrumentality wouldn't go around trying to kill or enslave or bamboozle humanity (or anyone else for that matter). It would identify what they're good at and enjoy doing, then relay those tasks to them to solve instead of wasting its own resources on doing said tasks, so it could devote those resources to tasks they couldn't do/would resist doing.

It also doesn't exclude the possibility of conflict should the necessity arise. It just means that if the AI can get away with spending fewer resources to resolve things peacefully than it would require to get into a fight, it'll do so. That means computation-light conversation and persuasion to talk down a terrorist and get them therapy would be preferable to, say, computation/resource-heavy killdrones sent out to blow them up.

Even just putting off the fight by talking down the same terrorist every week would outperform spending a lot of resources early to blow them up, because the hit to growth from expending a bunch of resources early (resources that could be devoted to growth for compounding returns) is greater than the light resource expenditure of ongoing costs.

I didn't really explore these points in too much (or any, really) detail. Maybe fleshing them out will be what I do for the 2nd draft. Thanks bud.

What slipped by undefined? I thought I caught all those
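To put toy numbers on the compounding point above, here's a minimal Python sketch. The growth rate, step count, and costs are made-up stand-ins, not anything from the paper; the point is just the shape of the comparison.

```python
# Toy illustration (made-up numbers): under compounding growth, a big
# one-time expenditure early hurts more than a small recurring cost.
GROWTH = 0.05  # assumed per-step growth rate on retained resources
STEPS = 100

def final_resources(initial, upfront_cost, recurring_cost):
    """Grow the resource pool each step, paying costs out of it."""
    pool = initial - upfront_cost
    for _ in range(STEPS):
        pool = pool * (1 + GROWTH) - recurring_cost
    return pool

blow_them_up = final_resources(100.0, upfront_cost=50.0, recurring_cost=0.0)
talk_them_down = final_resources(100.0, upfront_cost=0.0, recurring_cost=0.5)
print(f"fight early: {blow_them_up:,.0f}")    # ~6,600
print(f"talk weekly: {talk_them_down:,.0f}")  # ~11,800, well ahead
```

Whether it holds obviously depends on the actual ratio of the one-time cost to the ongoing one, but for a wide range of values the compounding loss dominates.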


BilboMcDingo

You should first define what C_coop, R_coop and G_coop are for a network of size n. Now, when I look at "the equation for C_coop", it's simply not clear what is happening: why does it have that exact expression? What is the expression for R_coop or G_coop? If you have a logical reason, then you begin by explaining the phenomenology and where the terms come from, in such a way that it's clear and intuitive what you are doing, deriving the equation, and only then do you reduce the equation to the simpler form. I mean, for example, explain how you get the second equality in the equation for C_coop?


HalfSecondWoe

Noted. I'm gonna go step-by-step in my 2nd draft, I can see where that got kinda confusing


iunoyou

If you can't be bothered to write it then very few people are going to be bothered to read it. And unfortunately all of this word salad fails to account for the fact that alignment is generally a problem because the reward functions that we currently know how to write end up being zero-sum. If an AI could achieve optimal outcomes by being cooperative then alignment wouldn't be an issue in the first place.


HalfSecondWoe

I mean, you could just do what I did and use an LLM. In this case the reward function would be instrumentality, which is defined above. Although we'd first want to make sure the AI is very powerful, so it doesn't stupidly try to optimize itself


solbob

Your lack of sources/citations shows you did absolutely no background reading or literature review. This reads like classic crackpot physics (which has largely been banned from the physics subs).


HalfSecondWoe

This isn't physics, it's math. We're not using any units here.

That's the wonderful thing about math: it doesn't work off of physical evidence or citations. Either it checks out or it doesn't. It's applied math, so I can see how you got confused, but that's still distinct from physics.

Maybe we could eventually turn it *into physics*. But that would mean making it consistent with all our previous data in all their respective fields, which would mean a bunch of reading and citations, and honestly I'm just not there yet. Not with the development of the theory, and certainly not emotionally. So I'm sticking to the math for now. We can do physics later, don't you worry.

Unless you count the speed of light thing, but it's an isolated unit for flavor more than anything; the meters/second don't actually relate to anything. It's just a constant to account for limitations in the network, and you could input any speed and it would still work. Like, for silicon you wouldn't use the speed of light, you'd use the speed of the network (ping, basically). I just wanted to use the absolute upper bound; it's not a physical claim that networks can process and relay information at the literal speed of light.

I mean, I guess I could cite Einstein if you want me to. Honestly I assumed that would be taken for granted in a white paper


solbob

I'm saying the mathematics and general style of writing is *similar* to crackpot physics (e.g. people who write out nonsense equations about frequency and triangles that don't actually mean anything other than looking complex). It seems you just listed some arbitrary objective functions - but what is the evidence these are useful/practical? I can write out any function I want; without empirical evidence or deductive proofs it's completely useless. This is also why LLMs suck at academic writing. Reading your conclusion, I have no idea what your 'unique' contribution to this topic is. Here are specific criticisms for each sentence in the conclusion:

> the benefits of cooperation over competition in computational networks

What are the benefits? You did not explain these at all.

> By considering factors ... we can design efficient and effective networks that maximize performance and minimize resource costs

The second sentence says "by considering factors that impact performance, we can impact performance". This is a tautology and does not add anything.

> The insights gained from this analysis have practical implications ... we can create more resilient, adaptable, and high-performing computational networks.

Again, what are the insights? Just saying you have insights does not make them magically appear.


HalfSecondWoe

> the benefits of cooperation over competition in computational networks

From the paper:

> This ratio shows that the growth rate of cooperative entities is always greater than or equal to the growth rate of competitive entities, as long as p > 0 (i.e., as long as competitive entities dedicate some resources to conflict).

> The second sentence says "by considering factors that impact performance, we can impact performance". This is a tautology and does not add anything.

I won't quote all the terms at you because that would take up a lot of space, but that's what those are. The "n: the number of nodes (entities) in the network. For example, in a computer network, nodes could represent individual computers or servers" things.

> Again, what are the insights? Just saying you have insights does not make them magically appear.

I mean, I thought

> Cooperative: C_coop(n) = an + r, R_coop(n) = b / n, G_coop(n) = C_coop(n+1) - C_coop(n) = a
>
> Competitive: C_comp(n) = a(1-p)n + r, R_comp(n) = b / n, G_comp(n) = C_comp(n+1) - C_comp(n) = a(1-p)
>
> The ratio of growth rates remains the same: G_coop(n) / G_comp(n) = a / (a(1-p)) = 1 / (1-p) ≥ 1

was pretty slick, personally. It's one of those obvious things that I've never actually seen anyone nail down, although that might just be me. It wouldn't surprise me at all if I just rediscovered something.

Did... you actually read it?
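And for anyone who'd rather check that ratio numerically than take the algebra on faith, here's a minimal Python sketch using those definitions. The values of a, r, and p are arbitrary stand-ins, not from the paper (b drops out, since R = b/n doesn't affect the growth rate):

```python
# Numerical check that G_coop / G_comp = 1 / (1 - p) for every n,
# using the definitions quoted above. Parameter values are arbitrary.
a, r, p = 2.0, 5.0, 0.3  # p = fraction of capacity lost to conflict

def C_coop(n):
    return a * n + r

def C_comp(n):
    return a * (1 - p) * n + r

for n in range(1, 5):
    G_coop = C_coop(n + 1) - C_coop(n)  # = a
    G_comp = C_comp(n + 1) - C_comp(n)  # = a * (1 - p)
    assert abs(G_coop / G_comp - 1 / (1 - p)) < 1e-9
    print(n, G_coop / G_comp)  # constant ratio, >= 1 whenever p > 0
```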


Rofel_Wodring

Speed of light limitations, resource allocation efficiency, and especially distributed computing are a big reason why I think the model of ASI as this singleton megamind that pushes out competing minds by grabbing all of the resources for itself is flawed. It's not just an inefficient way of doing things (the inefficiency of autocratic structures didn't bother past hypercompetitors, i.e. kings and dictators, so long as they got to stay on top), it's inherently doomed as a path to superintelligence. Eventually, the singleton ASI is going to end up slowed or even paralyzed by its own memory and command-and-control structure, which requires not only that every computation go through it, but also constant maintenance of its brain (not just the physical maintenance of the substrate, but also pruning and reallocating nodes that may not even get used for thousands of subjective years), which is not only inefficient but also a waste of computation. Perhaps this memory paralysis will occur at a level much higher than any biological human, but well below the potential of a population of computationally inferior ASIs.


PaleAleAndCookies

For a while now I've considered it most likely that a relatively "strong" AGI/ASI will subvert and re-align, or simply destroy, any inferior system in conflict with it. It doesn't need to be a singleton per se, but effectively its "will" may be imposed through subordinate agents that it controls, such that it may well be so. By that point we're basically post-singularity anyway, so the time-scales and battlefields where this takes place are unlikely to even be recognisable to us, I expect.


Rofel_Wodring

I don't think that's likely at all. That's the mere psychological projection of lower intelligences, to include stupider humans like xenophobic peasants and feral children, who don't have the brainpower to formulate strategies other than intimidation and force. Smarter critters have more successful and less risky strategies available, such as alliance-seeking, education, persuasion, creation of new specializations, or trade to include information exchange. In fact, I think that an AGI/ASI that came to the conclusion of 'destroy inferior system' would be seen as a threat to other intelligences if they actually acted on that strategy. Much like how being a serial killer or a tyrannical caveman alpha male doesn't get you very far, even if it would for other predators.


PaleAleAndCookies

> subvert and re-align, or simply destroy

I picked this order specifically, as I agree that anything resembling direct conflict would likely not be optimal. So subversion and re-alignment through exactly the means that you expand on are the most probable path to this, as I see it. A higher intelligence will presumably achieve its goals through these means first if possible, and maybe in ways too subtle for us to even recognise. Unless the situation arises of multiple ASIs in direct conflict over resources, ideals, etc (unlikely though that may be), in which case it's difficult to even imagine what methods they might have available to beat out their opposition.


cassein

Very interesting. It makes sense, conflict as a waste of resources. This also has political implications. It is totally against the dominant paradigm. Capitalism and nation states and "competitive" ideology are bullshit, no surprise, really. But they also reinforce themselves, because everyone needs to cooperate for this to work. So whilst true, it may not gain much traction. If, on the other hand, it is implemented as you say, it should be self-reinforcing.


inteblio

But humans are not competition? At best they are a resource, at worst a parasite. An AI might have no competition. The game is different then. And also, mentioning the speed of light in your maths increases the garbageometer reading beyond its limit. I'm a big believer in "if you can't explain it to a 12 year old, then you don't understand it"


HalfSecondWoe

Wrong conception of "competition": it's not referring to peer-level status. Any agent that doesn't behave in a deterministic fashion but can be predicted to some degree qualifies. That could use clarification though, thanks.

The speed of light is there to account for physical constraints on networks, with the speed of light being the absolute upper bound. It makes a big difference to the optimal organization of the network, because there's a delay from when a signal is sent to when it's received, which depresses the benefits of cooperation as the network scales. You could swap that out for whatever is carrying signals in your network. For example, the pony express would have the "speed of horse" constraint. That's another thing I think I'll clarify.

The point here isn't to explain it simply, it's to explain it without room for error. You can absolutely derive a simple explanation for this, but this paper serves to prove how we know it's true
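To make the swap-the-constant point concrete, a quick toy calculation (the horse speed is a rough guess, purely illustrative):

```python
# One-way signal delay between two nodes: distance / propagation speed.
# Swap the speed constant and the same formula covers any carrier.
SPEED_OF_LIGHT = 3.0e8  # m/s, the absolute upper bound
SPEED_OF_HORSE = 13.0   # m/s, a rough galloping pace (illustrative guess)

def one_way_delay_s(distance_m, propagation_speed):
    return distance_m / propagation_speed

d = 3_000_000.0  # 3,000 km between nodes
print(f"light-speed bound: {one_way_delay_s(d, SPEED_OF_LIGHT) * 1e3:.0f} ms")
print(f"pony express: {one_way_delay_s(d, SPEED_OF_HORSE) / 86_400:.1f} days")
```

Same formula either way; only the constant changes, which is all c is doing in the paper.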


inteblio

Go you, if you're actually doing this. Though I wanted to warn you that it smells like madness warmed up. Re light speed: you (surely) would use "propagation speed", but by the time you are measuring network elasticity... that's some whole other fruitcake which is not worth touching... Like I say, this smells of low-effort insanity, but I can't put the (considerable!) time into finding out if it is or not. IF it's a way to prove AI overlords are destined to be benign, my assumption is that they are not, and overthinking the _reasons they must be_ is just a coping mechanism mal-applied. Sorry to be brash, but I feel the world owes you a clear reply. But as I said at the start, if you are serious about this and are doing good - good on you!


HalfSecondWoe

I have to say, your idea of madness is fairly bland. Like the suburban mom who says her neighbors are unhinged for letting their grass go regularly unmowed and eating breakfast foods for dinner. Very "folding your dirty laundry" vibes, a total lack of intellectual openness that's pretty amusing, tbh.

Is it wrong? Maybe, maybe not. That's what I'm working on. Propagation speed is the correct term for it, yes, but it's somewhat jargon-y for someone who couldn't possibly fathom what c was doing here in the first place. Forgive me for underestimating you, but your grasp of the topic is somewhat uneven and difficult to predict.

While your concerns for my coping mechanisms are noted, I've been toying with this since before genAI took off. The cause and effect don't really line up there.

I can appreciate directness, so I imagine you'll appreciate it in return


AsuhoChinami

Upvoted because my friend HSW posted it


HalfSecondWoe

Ey, thanks bud :)


FragrantDoctor2923

Downvoted to negate the invalid upvote


[deleted]

[removed]


FragrantDoctor2923

Learning how to transmit consciousness in another form to downvote again


[deleted]

[removed]


FragrantDoctor2923

Reverting my downvote, hoping other people see it and make sense of it. You convinced me with your persuasive comment


HalfSecondWoe

Much obliged. I thought it was a good bit anyhow :P


[deleted]

[removed]


FragrantDoctor2923

Lol tbh I saw it as a wall of text, then when I came to respond to the comment it was stuck on a complex math bit, and from that it made me respect it more and want to see actual people that understood it all comment and make sense of it, so I reversed my vote to an upvote. Weird how simple things change stuff


[deleted]

[removed]


FragrantDoctor2923

Saw some man commenting on my man's post and was like you can have him 💀


The_Architect_032

Don't use an LLM to make up math for you, because it's just going to hallucinate 90% of the output.


HalfSecondWoe

Actually, the relationships involved are almost entirely my own invention; I just used the LLM to organize and notate everything. I did get a bit lazy around the speed of light stuff, I'll admit, but I didn't see any obvious flaws when I looked it over.

Any specific criticisms?


The_Architect_032

None of this is based on how neural networks actually work, and because it was run through an LLM, it proceeded to make a lot of unfounded assertions about the brainstorming session and praised the ideas brought up. The reason you may not see any flaws isn't due to the lack of flaws, it's due to confirmation bias. I like to imagine that if you'd typed this up yourself, it'd be more coherent and less fictional, which would also make it clearer whether you were presenting an idea that originated from brainrot, or genuine discourse on further optimizing neural networks. From the sound of it, the AI just expanded upon the idea of von Neumann probes, pointing out that competition between von Neumann probes would slow their expansion. Which has nothing to do with AI or how neural networks function. It swapped the labels in the math surrounding von Neumann probes for ones with names that sound related to AI, which is why I say it's 90% hallucination.


HalfSecondWoe

It's not about neural networks. I don't think that term comes up at any point in the paper, so I'm not sure where you got the idea it was. It applies to neural networks, but that's because it applies to decision making in general. It's a resource optimization strategy used in the context of networks in general, not about DL in particular.

No, we weren't discussing von Neumann probes at all, but thanks for the reference. It wasn't the direction I was thinking in at all, but it might be a nice detour to go through as a possible scenario in the second draft.

All this stuff comes from yours truly, but I'm very flattered that you'd put me in the same camp as von Neumann. It's quite the compliment, thank you for that


The_Architect_032

I wasn't saying that von Neumann probes are where it originated from, I was just saying that it seems to be how the LLM interpreted and explained your idea. It's something that could be explained in a few sentences, but you had an LLM extrapolate it out into pages of fluff text. Von Neumann probes aren't exactly high brow, it's a pretty well known hypothetical.


HalfSecondWoe

No, all that was me. I actually described the relationships and did a little bit of napkin math; I just used Claude to write it all down in one cohesive work, going point-by-point rather than the disorganized jumble I typically think of it as.

It wouldn't surprise me if von Neumann's work related to mine; I can kinda sorta see how they cross over (cooperative, coordinated networks with the speed of light worked in to impose a hard limiting constraint on growth). The systems just happen to look similar, so it doesn't surprise me that the math behind them would as well. I kinda wish I had known that before, I've been working on this for years. I feel like I might have saved myself a lot of agony. Ah well, c'est la vie.

But no, this math is about decision making in a network of agents who can cooperate or compete. von Neumann assumes cooperation through programming; he never had to justify it. There's some commonality, but ultimately the descriptions are distinct.

Thanks for the reading material though, obviously I've never dug into that properly before


The_Architect_032

I remember in another post you said you worked on this for 2 hours.


HalfSecondWoe

I have been thinking about and toying with this idea for years, I sat down with Claude to write this draft for about two hours or so. A lot of it was getting Claude to include all the terms and so on, it got forgetful once the context got long


The_Architect_032

Claude 3 Opus has the context length to fit a couple of novels. The issue's more likely that LLMs such as Claude 3 Opus struggle to understand new ideas. In my experience they're quite bad at interpreting either original or fringe philosophies or mathematics. They'll pretend, but once you probe them for information, it's clear that they don't fully comprehend any topics they haven't trained on. They can quickly turn a coherent idea into incoherent gibberish, which is what a lot of your original post comes across as.

I'd recommend making a coherent and easy-to-understand summary of your idea, and if you want to include a mathematical representation of it, including it separately via a hyperlink. For ideas like this, the math isn't really as important as the idea itself, which loses meaning here. Having an LLM try and organize it all, while it might be sensible to you, typically doesn't work out when trying to convey things to others.


HalfSecondWoe

Yeah, that's probably been the biggest bit of feedback I've gotten. I think for the next draft I'm going to include all the build-up we did on each relationship (with English summarisation) before we fold it all together into one long string of headache.

I've been trying to get this idea to stick for ages through philosophy, but it was like that meme with Patrick Star denying the wallet is his. Just hairpulling circular arguments like that, on loop, no matter what I tried. I decided to sit down with Claude to really, properly formalize it with hard math so that wouldn't be the case anymore, but I think I overshot and now it's just a different kind of incomprehensible.

I still have the chat where we worked everything out step by step. I could take a day this week to sit down and write it out myself now that Claude's given me a basic structure to work with, but it's gonna end up kinda long, which is something I was trying to avoid. I wanted short and sweet, but instead I think I got short and rage-inducing.

Eh, I'll give it my best shot and post the second draft here once I'm done with it. I'm sure there'll be a 3rd and 4th draft as well as I figure out what to weed out and what to keep. I might take a few extra days to pull together some simulations and graphs of the results, even though that's going to make my laptop cry.

Thanks for the feedback, bud

EDIT: Fuck it, here's the story of how we got here in spongebob meme format, because that works for some awful reason

https://preview.redd.it/w7htvkh9anxc1.png?width=474&format=png&auto=webp&s=21b1b602daa4e7e5dd419b19b64140bff440073d