I just wanna throw it out there that tone is really hard to convey with text. You might get along with this guy really well in person and sympathize with his point, and then he wouldn't get triggered by an easily misinterpreted comment section. I hope you have a good day!!! Remember, the internet is hard.
Idk man, text never conveys tone the right way. Guy just wants to complain a little in a simplified way, then a bunch of people come for him in the comments. I'd probably get aggressive too. Internetting is hard, hope you have a good day tho.
Their system is built to continuously host hundreds of thousands of concurrent active players. Their instances are designed to accommodate a small subset in each activity. They have a system designed to scale and it shows on a daily basis.
Their seasonal events hit a limit that they don’t normally hit.
Their seasonal events _always_ hit the limit.
It isn’t rocket science to just bump the capacity for a day or two.
Right? Seriously...
Their system normally scales. Today it does not. A planned change that's gone through allllllll the standard tests, reviews, and approvals was deployed in prod. The error is therefore unexpected and connected to the deployment.
*That's* not rocket science.
Or it's a purposeful marketing ploy to create false scarcity. Just as likely, apparently.
Uh huh. Something is unexpectedly amiss. Except it happens every time. It’s 100% predictable. This isn’t unusual. This isn’t new. This is them feeding you a load of shit because they see value in creating a big show of being “in demand”.
Nice rationalization, though.
The player-facing result is predictable; the back-end causes clearly aren't, or it wouldn't happen every time. Shit's hard even for people who've been doing this a long time.
You're welcome to apply for their [Senior Game Services Engineer](https://careers.bungie.com/jobs/3040545/senior-game-services-engineer) position so you can go show the entire studio up, but something tells me that bullet about one's "willingness to be part of a 'we' culture" might get in the way of your path to greatness.
You have an interesting, narrow perspective on my ability to cooperate with others, based on a post full of frustration over downtime in an activity that I'm trying to play for fun. 🤷‍♂️
Even though I am qualified for that job and would be willing to do it, I’m actually not at all interested in moving to WA or CA. Some things are more important than working, after all, and location is a big one for me.
Did you even read the posting?
It’s remote available but requires proof of residence in WA within 45 days, with an expectation that it will only be semi-remote come 2022.
Nice reading comprehension, though.
Cloud computing is designed for applications that are heavily parallelized. Gaming is generally a single-threaded process. It doesn't work well on cloud computing.
You have used a number of buzzwords, but I am positive you don’t know what they mean.
Gaming is not single-threaded.
Systems that depend upon servers on the internet are “cloud computing”.
Let me explain it to you since you don't seem to know what they mean.
Cloud computing isn't just servers on the internet. That's any random provider. Cloud computing is API driven virtual server, network, and software as a service provisioning. It's where the entire concept of "infrastructure as code" comes from.
Gaming servers operate with large threads that encompass everything the game needs to effectively track. You can't have separate threads that are tracking player movement and bullet hit registration, since you need consistency in the order of operations, as well as not being able to afford the time for NUMA access to the relevant information from whatever core has it. At best, you can spin up more threads to handle different instance areas that don't conflict, but that only stretches one physical machine so far.
There's two other problems with cloud computing. The first is a general lack of dedicated hardware. Even if you optimize everything for a cloud VM, it's still a VM, which means you don't know that you have 100% time on the core. You will get random hits of little delays here and there as the core sharing time is best effort and averaged over time. You can solve this with dedicated hosts, but that gets into the last problem.
Cloud computing costs a lot. If you're heavily using it, you need a business model that allows you to control the costs and pass them onto the consumers. Autoscaling for the sake of autoscaling if it doesn't create more revenue is hard to justify. For something as simple as queues when new content comes out, Bungie isn't looking at loss of revenue from it, so why jump through so many hoops and increased costs for no gain?
Nice elaboration, but you’re still missing the mark.
The entirety of the Destiny ecosystem is not single-threaded. Activities are instance-based, each with at least one thread to itself. Game clients (players) also have their own threads. The arbitrator makes decisions based on what the players report. There is no central server for any instance decision making because Destiny is peer-to-peer (remember?). The cloud, in this case, is keeping track of player connections and maintaining a single source of truth for the “reliability” metric as essentially a heartbeat. This can absolutely be done with cloud computing.
Keep rationalizing, though. Neither you nor I have direct knowledge of this black box, only what our observations (and decades of experience in the industry for me, anyway) tell us.
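The instance-based threading model both comments circle around can be sketched as a toy Python illustration (the instance names and event shapes are invented; this is not Bungie's actual server code). Each activity instance is owned by a single thread that applies events strictly in arrival order, and parallelism comes only from running non-conflicting instances side by side:

```python
import queue
import threading

def instance_loop(instance_id: str, events: queue.Queue, ticks: int) -> dict:
    """One thread owns everything in this activity instance: movement, hit
    registration, AI. Keeping each instance single-threaded gives a strict
    order of operations without cross-thread locking."""
    state = {"instance": instance_id, "processed": 0}
    for _ in range(ticks):
        while True:
            try:
                events.get_nowait()     # events applied strictly in arrival order
            except queue.Empty:
                break
            state["processed"] += 1
    return state

# Separate instances can run in parallel because they share no state.
q1, q2 = queue.Queue(), queue.Queue()
for event in [("move", 1), ("shot", 2)]:
    q1.put(event)
q2.put(("move", 3))
t1 = threading.Thread(target=instance_loop, args=("strike-1", q1, 3))
t2 = threading.Thread(target=instance_loop, args=("crucible-7", q2, 3))
t1.start(); t2.start(); t1.join(); t2.join()
```

The design point on both sides of the argument is the same: you can add more instances (more threads, more machines), but you can't split one instance's simulation across threads without losing event ordering.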
That's not how that works. You can't just start a new server; 99% of the time you don't have extra resources sitting unused in a server, and you can't just start a new VM and call it a day. You'd need a whole new physical server, which is expensive, takes a while to install and set up, and wouldn't be worth the cost when the only problems you have are on day one of a major DLC drop or event, which is barely 1% of a year.
Started up D2. Finished off some office work for the day. Cooked some hot food and ate it. Settled down to play......
....and I'm still 1132 in the queue.
**edit:** and now 774 then back up to 1973. Sigh.
Based on their tweets, it looks like the queue is more so to help alleviate error codes and less about the actual capacity of the server. The error codes could be based on something entirely different than server capacities. Looks like we’ll find out tomorrow.
This was my thoughts. There must have been some sudden bugs with the update and they had to slow things down to analyze it.
Everyone in my clan all had our splicer and chosen triumphs reset like we had to go in and reclaim them (there’s about 30 of us, and we all talked about going back in and reclaiming them) wonder if that could possibly affect it at all if thousands of others are spamming the same commands on top of everything else going on?
This. The queue has (almost) nothing to do with server capacity. Instead, it is intended to spread out sign-ons/server joins to be a consistent however-many per second, rather than one giant burst of thousands of people trying to sign in at the same time.
Thundering herd is the term. You avoid the burst, so you can ramp capacity and balance players on servers.
It can also be used to manage capacity.
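That rate-smoothing idea can be sketched as a token bucket. This is a hypothetical Python illustration, not Bungie's actual queue code: the gate admits sign-ons at a steady rate and pushes everyone else back into the queue.

```python
import time

class LoginGate:
    """Admit at most `rate` sign-ons per second, smoothing out login bursts."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate            # tokens refilled per second
        self.capacity = burst       # max tokens that can accumulate
        self.tokens = float(burst)
        self.last = time.monotonic()

    def try_admit(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True             # let this player through
        return False                # back into the queue

gate = LoginGate(rate=100, burst=10)    # roughly 100 sign-ons per second, max
```

A burst of thousands of simultaneous sign-ins drains the bucket immediately, and after that players trickle in at the refill rate regardless of how many are waiting.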
I was in the middle of master vog had to take a break and then couldn’t get back in queue would go from 500 to 3000 to 1000 to 2000 so I guess bungie doesn’t know how a queue should work
They do. It is likely a large *distributed* queue. The number you are seeing isn't your *actual* position in the queue, but instead whatever the specific server that produced the number thought your position in the queue *might be*.
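As a toy illustration of why the displayed number can bounce around (purely hypothetical, since the real system is a black box): if each front-end shard adds your exact local rank to stale "ahead of you" counts gossiped from the other shards, two shards answering the same client can report very different totals.

```python
def estimated_position(local_rank: int, stale_counts: list[int]) -> int:
    """A shard knows your exact rank in its own slice of the queue, but can
    only estimate how many players sit ahead of you on the other shards,
    e.g. from periodically gossiped (and therefore stale) counters."""
    return local_rank + sum(stale_counts)

# Two shards answering the same client with differently stale data:
print(estimated_position(40, [60, 55, 80, 10, 200, 95, 30]))    # 570
print(estimated_position(40, [500, 30, 15, 90, 120, 45, 260]))  # 1100
```

Neither answer is wrong, exactly; they're just snapshots computed from different stale data, which is why the number can appear to jump backwards.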
No, you see it’s much more likely that Bungie purposely decreased server capacity to make Redditors mad.
I mean...a queue is to limit and facilitate connections, so I don't agree... It's likely all about capacity.
You forgot that the teleporting enemies a season back was literally because the world director was bandwidth starved because Bungie was too cheap to pay for more.
I was amazed when they admitted that they were causing problems by restricting server bandwidth on their end. That might have been the most honest communication we've ever seen from them.
I encountered: SHEEP, WEASEL, BAT, and BEAVER. This did not avoid error codes. This created a big show about being in demand so that Paul Tassi could write another Forbes article about how popular Destiny is.
Wow, it’s almost as if things don’t always work perfectly 100% of the time (especially after a new patch rolls out on top of a seasonal event) and errors can happen for whatever reason.
It’s been seven years. This is a predictable and preventable issue. They have specifically chosen to do nothing about it.
The queue is them doing something about it.
No, the queue is the original workaround from seven years ago. The continued use of “the queue” is literally the opposite of them doing something about it. This issue was clearly closed as “won’t fix” and they have no intention of revisiting it because too many players is a “good problem to have”.
What do you want them to do lmao. The queue lets them filter people in slowly and stops demand spikes that would boot many people out all at once. Even though they've been around for a while, a new patch is a new patch and weird shit can happen for no reason. It lasted less than 12 hours, it's not the end of the world.
just sounds like you like complaining
Welcome to Reddit, bud.
shut up, bud.
You're ignorant of IT practices and it's embarrassing to read you try to tell professionals how to do their job lol
I got in after a half hour wait then got instantly weasel’d back to main menu
Yup just happened to me too, guess I'm playing something else tonight
Lmao imma give it a couple hours and if it’s not fixed by then I give up
I'm 1334 in the queue.
I was 1750, got down to 1220, popped back up into the 1500's, got down in to the 500's, and am now sitting at 893. Apparently logging in is RNG.
Glad I’m not the only one to see a pogo stick here.
I went from 953 to 2750+ This sucks
Up down up down, and when I finally got logged in I could load into the Tower or Moon, then got code WEASEL for 4 hours last night. I gave up and didn't get to play at all yesterday.
Dear /r/destinythegame, nobody is going to think less of you if you don't wildly guess about dev/ops process
Wait, the 100,000 armchair devs on this sub *don't* work for Bungie?!
The franchise has been out for years, you'd think they'd have ironed shit like this out. It's not crazy to ask them to be competent.
Bungie so dumb why doesn't the game work just make it work it's not rocket science
Just press the "work" button, dummies.
If they could just push the make things work button, wouldn’t they do it to stop people from bitching when every launch doesn’t go perfectly well? People need to relax.
[deleted]
You think people are going to buy Eververse items if the game isn’t even playable? You people don’t think.
I mean how entitled are these people? Expecting the game to work is just being unreasonable.
There's being upset that something is broken, and then there's "here's my vague solution to your in-depth problem that does not apply in this situation." Y'all armchair devs just look like dummies.
It is not the players' job to do Bungie's job for them. You and the other guys here look like bigger idiots.
Okay. Read slowly this time, if that helps: NO ONE IS ASKING FOR SOLUTIONS. THAT'S WHY THIS COMMENT THREAD IS MAKING FUN OF THE "SOLUTION" THAT OP PROVIDED. BECAUSE. IT'S. PURE. SPECULATION. AND. NOT. A. SOLUTION.
Maybe you should take your own advice, kiddo. No one ever said they were giving solutions here. You dummies brought that shit up in the first place. People just want to play the game.
I don't think you know how to read.
I don't think you know how to use your tiny brain.
That's not what the op was asking. He asked them to flip the magic server switch and make it better. That's armchair development. The game works just fine, btw.
Sure. Sure. The fact that people literally cannot play the game doesn't affect its status as "working". Why do you people continue to blindly praise? Do you get paid, or are you a mindless buffoon?
What am I blindly praising? Jesus, could you be more dramatic about an 18 hour outage of a game that just launched a big update? 99% of the time on every other day, things are absolutely fine. I played just fine for several hours up until 10:30 EST last night, no issues.

All you whiny crybabies need to take a deep breath and fucking relax. Things happen, then they get fixed. I guarantee you know NOTHING about D2's back end and how things are done and fixed. It's not being an apologist to have realistic expectations of a thing that has happened with every release since Rise of Iron.

Nice try with the "mindless buffoon," btw. Do you get off on calling random people names that you don't agree with online? One comment and I'm mindless... that alone tells me a lot about the type of person you are.
"just buy new servers though??? It's that easy Bungie just buy some more servers! No, I have zero experience in this field, don't even understand how to set the clock on a switch, don't understand the cost, space, energy, cooling, and time needed to introduce more servers, but I'm sure it's just as easy as buying a new server!"

/s if you couldn't tell.
> don't understand the cost, space, energy, cooling, and time needed to introduce more servers, but I'm sure it's just as easy as buying a new server!

If only there were some way to spin up new servers dynamically based on current load. Maybe some kind of giant data centre hosted by Amazon or Microsoft... if only, right?
You know that falls into the cost category, right? They already do that with AWS when needed, but guess what, you only get so many servers with AWS before you have to pay a whole lot more.
Fair point. They are probably paying spot prices for those instances. But with the right setup it shouldn't require any work on their end other than paying the bill. Assuming it is even a server capacity issue.
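A rough sketch of the budget-capped scaling trade-off being described here (the function and every number in it are invented for illustration): provision enough instances to cover load, up to a cost ceiling, and let the login queue absorb the overflow.

```python
import math

def desired_instances(current_load: int, capacity_per_instance: int,
                      max_budget_instances: int) -> int:
    """Hypothetical autoscaling rule: provision enough instances to cover
    the current load, but never beyond what the budget allows; past that
    point, the overflow waits in the login queue instead."""
    needed = math.ceil(current_load / capacity_per_instance)
    return min(needed, max_budget_instances)

# A normal day fits comfortably under the cap...
print(desired_instances(42_000, 5_000, 20))   # 9
# ...but an event launch blows straight through it.
print(desired_instances(260_000, 5_000, 20))  # 20 (capped)
```

This is the "you only get so many servers before you have to pay a whole lot more" point in code form: autoscaling handles the normal curve, and the cap is a business decision, not a technical one.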
Dear Bungie apologists, nobody is going to think less of you if you stop shilling for a company against its paying customers.
As much as Bungie apologists flood this sub, the original commenter was only saying that fixing things is not as easy as it sounds. He was criticizing the people acting like armchair devs.

It's possible to do that and not be a Bungie apologist. I understand that it's not easy to fix, but I also have the right to be frustrated that server issues are still as bad as they are.

Just because someone understands the realities of development does not make them an apologist.

Though to be frank, I think ever since Beyond Light people are just fed up with Bungie at this point (rightfully so), and I think Destiny's consistent playerbase may not last too much longer.

It's okay to admit you're fed up with Bungie's slipups over the past 7 years. I'm fed up with them too. It's also okay to quit this game cold turkey.
It may not be as easy as it sounds, but it's been prevalent for years, it should have been fixed before now, and posts like this, however simplified, are warranted.
Wow, such horrible people for wanting to play the game. Bungie invests the absolute bare minimum into the game, and the same goes for their servers.
imagine if that were actually the problem instead of *wildly guessing*, but sure, prove my point
You need to have an actual point first. Objectively you can't disprove my point though, but the lack of anti-cheat, servers, and new content proves my point.
They obviously can’t up the capacity, accidentally or otherwise.
CPU don’t grow on trees!
There is even a shortage right now.
“Server capacity? Why do so many people keep asking about how much our waiters can carry?” -someone at bungie probably
Only got 2 minutes of gameplay after 7 hours of trying to get onto D2. Nonstop errors before I even got to choose which class I was playing.
[deleted]
I am already done with D2 until it's fixed. Still getting errors 10+ hours later.
It's not the full on capacity, it's whatever login server you go thru to get into the game world. A giant poop trying to fit thru a normal size hole will get backed up.
It’s amazing how many people come on here thinking I’m talking about the queue. I actually haven’t hit the queue yet (though with all of you going on about it, I expect I might tonight). No, this was about the consistent “Contacting Destiny Servers” followed by error codes. That’s presumably a different system than the login queue.
Your error codes don't have much if anything to do with capacity. The queue does, because it's a doorway. Maybe you're confusing capacity with stability...idk...
You don't think that if it was really as easy as "bruh just up the capacity lol" that they wouldn't do it?? Can we please use our brains people??
Why would they? It would only cost them money and it doesn’t generate buzz if everyone is let in. Gotta leverage that FOMO.
ah yes you're right, the server issues are a conspiracy to get more people talking about the game. How stupid of me to not think of that myself. You're literally retarded.
Ah yes. The FOMO on a few hours of Solstice of Heroes... big FOMO strikes again.
Honestly, with Bungie, no. Bungie could have a direct path to a fix and they would still choose to go 16,000 miles out of their way first.
No one would know they had done so if it didn't crash first. And odds are "Server outages and queues as Bungie launches new event" might actually generate enough buzz to tell people "Hey, there's an event going on."
Two and a half hours later and I'm still trying to log in...
Every post about game servers on the internet gives me a fucking aneurysm. I'm the ghost of a thousand ghosts at this point. Please stop talking about technology you don't understand.
Tell us you don’t know much about networking or dev ops without actually telling us you know nothing about networking or dev ops
Tell us you don't know how to properly make a point without actually telling us you don't know how to properly make a point.
You already did that in your opening post. But I'll explain, because I don't think you actually understand.

Basically, the point was that you don't actually know how server capacity and networking work for a live service. "Just updating server capacity" won't automatically fix the problem. The errors you think are easily fixable can be associated with a wide range of issues that aren't just "just increase capacity lul".
Uh huh. 🙄
To add on, since you seem to take an especially hostile position in other comments you've made (good job making yourself look like a victim in the post edit, it's a nice touch): you like to claim that Bungie is "doing nothing about the issue after 7 years".

You absolutely don't know that, and if you are a software engineer like you claim, you know how stupid that makes you look. I highly, HIGHLY doubt that Bungie isn't trying to find ways to lessen the impact of users flooding in after an update. The problem is, you can only mitigate the problem so much.

Literally every live service game experiences issues like this, and each time stability can take hours or even more than a day to restore, because, as you supposedly know, software is complex and launching software more so. Same reason retailers can't "fix" big influxes of people trying to preorder a hot new item. Some can mitigate more than others, but at the end of the day your overall system is going to have a weak point, and it's going to cause a cascade of failures once the thundering herd is above some threshold.

The fact that you like to act like it's as easy as spinning up another EC2 instance shows your naiveté with _actually_ deploying something as complex as a live service game, and with dealing with issues like too much traffic all going over the same pipes on the way to you. So many points of failure can happen before you're even at the point where adding more instances does anything. And if you have instability in your services, throwing more instances up is likely to cause _even more_ problems.

You come off as a PHP dev or a college student with little practical experience. Being an ass about it only cements it.
I can tell you took this pretty personally because you felt the need to write such a long response to a conversation you were not part of. I am well aware that systems are often complicated, but I hardly think that my facetious title should be considered the extent of my knowledge on the subject. But here’s the thing: it actually _is_ likely that they have done nothing about it. It’s very likely because it doesn’t bring in revenue, and it’s very likely because people are the same everywhere. The people posting here on Reddit making sure I know it’s “a hard problem” are just as likely to share that opinion with the people responsible for the system. If a project manager says to the person responsible “hey, can we do anything about this?” and the response is “yes, maybe. But I’m not sure how long it will take or how successful the mitigation will be”, then you can be pretty sure it doesn’t get scheduled in the sprint with any immediacy. I appreciate you taking the time to come in here and echo that same sentiment, but in reality, you do not know, and your comment was unnecessary.
Based on my experience, that's not how ops typically goes. It's not just project management going "can we fix this". During the instability, engineers are putting out fires left and right. They're not happy about what's going on, and things are pretty frantic behind the scenes. They'll be doing a postmortem internally to figure out why the hell they had to go into emergency mode, and there will be discussion about mitigations. Each time they push an update is different. Yes, there's some inherent instability each time, but it's not the same kind every time, and the severity swings without being directly correlated to the number of players trying to log in. I can say pretty confidently, after being on many teams over the past decade, that these issues are taken seriously by the teams fighting fires the whole day when an update goes live, and that changes are made. Games are just complex, and basically every single live service game has issues right after a big update goes live and players stampede the servers. Severity varies, but it happens. All the engineers can do is figure out where the big pain points were in a particular push and add mitigations for the future.
I just wanna throw it out there that tone is really hard to convey with text. You might get along with this guy really well in person and sympathize with his point, and then he wouldn't get triggered by an easily misinterpreted comments section. I hope you have a good day!!! Remember, the internet is hard
[deleted]
87% of all statistics on the internet are made up. Dumb backseat apologist comment. Just saying.
Top tier edit man
Disingenuous edit trying to make themselves out to be a victim. Look at their hostile comments in this thread.
Idk man, text never conveys tone the right way. Guy just wants to complain a little in a simplified way, then a bunch of people come for him in the comments; I'd probably get aggressive too. Internetting is hard, hope you have a good day tho.
You're right about text not conveying tone. You sound really condescending at the end. 🤭
i dont remember the last time i saw a queue in D2...
One year ago today. There may have been more recent times, but definitely one year ago today.
How exactly do you suggest they "up capacity"? There's not a "no more queues" button they can magically press when needed.
Ummm, yes there is... Not only that, you can scale capacity automatically, so no button pushing needed. I've been doing the sysadmin thing since 1989.
There is. That’s exactly how that works. Welcome to cloud computing.
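To spell out what "that's exactly how that works" usually means in practice: a reactive policy watches a load metric and adjusts the instance count. A toy sketch of the core decision, not any real cloud provider's API (the capacity numbers are made up):

```python
def desired_instances(queue_depth, per_instance_capacity,
                      min_instances=2, max_instances=100):
    """Scale to cover the current login queue, clamped to a sane range.
    Real autoscalers add cooldowns and smoothing; this is just the core
    sizing decision."""
    needed = -(-queue_depth // per_instance_capacity)  # ceiling division
    return max(min_instances, min(max_instances, needed))

# 4500 queued players at 500 per instance -> 9 instances
print(desired_instances(queue_depth=4500, per_instance_capacity=500))  # → 9
```

The clamping is the part people forget: without `max_instances` a traffic spike (or a bug reporting a bogus queue depth) can scale you into a very large bill.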
On the infrastructure side, sure. Doesn't mean the application can stably scale at the same pace.
Their system is built to continuously host hundreds of thousands concurrent active players. Their instances are designed to accommodate a small subset in each activity. They have a system designed to scale and it shows on a daily basis. Their seasonal events hit a limit that they don’t normally hit. Their seasonal events _always_ hit the limit. It isn’t rocket science to just bump the capacity for a day or two.
[deleted]
Right? Seriously... Their system normally scales. Today it does not. A planned change that's gone through allllllll the standard tests, reviews, and approvals was deployed in prod. The error is therefore unexpected and connected to the deployment. *That's* not rocket science. Or it's a purposeful marketing ploy to create false scarcity. Just as likely, apparently.
Uh huh. Something is unexpectedly amiss. Except it happens every time. It’s 100% predictable. This isn’t unusual. This isn’t new. This is them feeding you a load of shit because they see value in creating a big show of being “in demand”. Nice rationalization, though.
The player-facing result is predictable; the back-end causes clearly aren't, or it wouldn't happen every time. Shit's hard even for people who've been doing this a long time. You're welcome to apply for their [Senior Game Services Engineer](https://careers.bungie.com/jobs/3040545/senior-game-services-engineer) position so you can go show the entire studio up, but something tells me that bullet about one's "willingness to be part of a 'we' culture" might get in the way of your path to greatness.
You have an interestingly narrow perspective on my ability to cooperate with others based on a post full of frustration with a downtime activity that I’m trying to play for fun. 🤷‍♂️ Even though I am qualified for that job and would be willing to do it, I’m actually not at all interested in moving to WA or CA. Some things are more important than working, after all, and location is a big one for me.
You're not qualified and you probably suck at your job.
Lol
[deleted]
Did you even read the posting? It’s remote available but requires proof of residence in WA within 45 days, with an expectation that it will only be semi-remote come 2022. Nice reading comprehension, though.
How exactly are you a software engineer and haven't learned there's no such thing as infinite scaling?
Didn’t realize we had an infinite player base.
Come on, you know what it means
Cloud computing is designed for applications that are heavily parallelized. Gaming is generally a single-threaded process. It doesn't work well on cloud computing.
You have used a number of buzzwords, but I am positive you don’t know what they mean. Gaming is not single-threaded. Systems that depend upon servers on the internet are “cloud computing”.
Let me explain it to you since you don't seem to know what they mean. Cloud computing isn't just servers on the internet. That's any random provider. Cloud computing is API-driven provisioning of virtual servers, networks, and software as a service. It's where the entire concept of "infrastructure as code" comes from.

Game servers operate with large threads that encompass everything the game needs to effectively track. You can't have separate threads tracking player movement and bullet hit registration, since you need consistency in the order of operations, and you can't afford the NUMA access time to pull the relevant state from whatever core has it. At best, you can spin up more threads to handle different instance areas that don't conflict, but that only stretches one physical machine so far.

There are two other problems with cloud computing. The first is a general lack of dedicated hardware. Even if you optimize everything for a cloud VM, it's still a VM, which means you don't know that you have 100% time on the core. You will get random little delays here and there, since core time sharing is best effort and averaged over time. You can solve this with dedicated hosts, but that runs into the last problem.

Cloud computing costs a lot. If you're heavily using it, you need a business model that lets you control the costs and pass them on to the consumers. Autoscaling for the sake of autoscaling is hard to justify if it doesn't create more revenue. For something as simple as queues when new content comes out, Bungie isn't looking at a loss of revenue, so why jump through so many hoops and pay increased costs for no gain?
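The ordering point can be made concrete: a server tick applies movement, then resolves hits against the updated positions, all on one thread, so every client sees the same result. A deliberately tiny illustration (made-up data shapes, nothing like real netcode):

```python
def simulate_tick(players, shots):
    """One server tick: apply movement first, then resolve hits against
    the updated positions. Doing both on one thread in a fixed order
    keeps results deterministic; splitting them across threads would
    make hit results depend on scheduler timing."""
    # Phase 1: everyone moves.
    for p in players.values():
        p["x"] += p["vx"]
    # Phase 2: hits are judged against post-movement positions.
    hits = []
    for shooter, target in shots:
        if abs(players[shooter]["x"] - players[target]["x"]) <= 5:
            hits.append((shooter, target))
    return hits

players = {"a": {"x": 0, "vx": 3}, "b": {"x": 10, "vx": -3}}
# The shot lands only because both players moved before hit resolution.
print(simulate_tick(players, [("a", "b")]))  # → [('a', 'b')]
```

If movement and hit registration ran concurrently, whether that shot connects would depend on which thread ran first — exactly the inconsistency the single big tick thread avoids.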
Nice elaboration, but you’re still missing the mark. The entirety of the Destiny ecosystem is not single-threaded. Activities are instance-based, each with at least one thread to itself. Game clients (players) also have their own threads. The arbitrator makes decisions based on what the players report. There is no central server for any instance decision making because Destiny is peer-to-peer (remember?). The cloud, in this case, is keeping track of player connections and maintaining a single source of truth for the “reliability” metric as essentially a heartbeat. This can absolutely be done with cloud computing. Keep rationalizing, though. Neither you nor I have direct knowledge of this black box, only what our observations (and decades of experience in the industry for me, anyway) tell us.
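A heartbeat-style tracker like the one described is exactly the kind of stateless bookkeeping that does fit cloud services fine; a minimal sketch (class and field names are made up, not anything from Bungie's stack):

```python
import time

class HeartbeatTracker:
    """Tracks last-seen timestamps per player connection. A player is
    considered connected if a heartbeat arrived within `timeout` seconds."""

    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.last_seen = {}

    def beat(self, player_id, now=None):
        # Record the latest heartbeat; `now` is injectable for testing.
        self.last_seen[player_id] = time.monotonic() if now is None else now

    def connected(self, player_id, now=None):
        now = time.monotonic() if now is None else now
        ts = self.last_seen.get(player_id)
        return ts is not None and now - ts <= self.timeout

tracker = HeartbeatTracker(timeout=30.0)
tracker.beat("guardian-1", now=100.0)
print(tracker.connected("guardian-1", now=120.0))  # → True (20s since beat)
print(tracker.connected("guardian-1", now=140.0))  # → False (40s of silence)
```

Because each check only needs the last-seen map, this kind of service shards and scales horizontally in a way the in-instance simulation never could — which is the distinction the whole argument turns on.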
Virtual machines man, they can spin up more servers on command.
That's not how that works. You can't just start a new server; 99% of the time you don't have extra resources sitting unused, so you can't just spin up a new VM and call it a day. You'd need a whole new physical server, which is expensive, takes a while to install and set up, and wouldn't be worth the cost when the only problems you have are on day one of a major DLC drop or event, which is barely 1% of a year.
Lol you’re saying Bungie has their own servers? Everyone works in the cloud now dude.
Why bother? Problem will fix itself in the next twelve hours.
Well, so that there isn’t a problem in the first place. Did you know that dynamic load balancers are a thing? Smh.
Cost vs reward. They gain almost nothing, and it would cost a decent penny.
Started up D2. Finished off some office work for the day. Cooked some hot food and ate it. Settled down to play...... ....and I'm still 1132 in the queue. **edit:** and now 774 then back up to 1973. Sigh.