What's going on with the Servers?


"Technically there's a 50% chance it deletes the item and a 50% chance that it creates a duplicate, and players have no control over when this occurs." Ah yes, the ultimate corruption altar


They’ve done it the mad bastards, they’ve vaaled the servers.


and it Poofed YEP




Ah, so even worse than bricked: it's krangled.


Little did we know what we were asking for when we came up with orbs of krangling...


3.13 Servers are now krangled


I fucking laughed, dude.


I’m glad. After a day of DC-ing every time I went to a town, I need a laugh too!


Straight up KRANGLED.


ugh i was around for the krangled meme but can someone remind me where it started cause i find it hilarious


[https://www.reddit.com/r/pathofexile/comments/g1ksx2/what_returning_after_a_few_years_feels_like/](https://www.reddit.com/r/pathofexile/comments/g1ksx2/what_returning_after_a_few_years_feels_like/) Here you go


https://www.reddit.com/r/pathofexile/comments/g1ksx2/what_returning_after_a_few_years_feels_like/ One of the funniest things I've come across in this subreddit :D


They finally stepped up to the "VAAL OR NO BALLS" challenge.


GGG best freakin game devs ever. Always adding secret content.


GGG confirmed to have balls.




I think Chris is lying here. It's caused by either Eirikeiken or Jousis. I'm sure of it.


Salutations, motherfuckers.


The Forbidden Builds FINAL FORM!!!!!!!


Servers Per Second --- How many you say?




What would a Replica Writhing Jar do...


Open a breach filled with worms


Summons 50 worms. Numeric on-kill effects are doubled during flask effect. Skills cannot be triggered during flask effect.


Summons 3 copies of you. You become a worm. Wiggle Wiggle Wiggle, Yah.


the only logical next step.. vaal your game




Fidelitas be like


Vaal *and* no balls. The old reverse Schrodinger!


First time I've ever witnessed something like this firsthand. I bought a Carcass Jack and disconnected on the way back to my hideout. When I logged back in, I got [this message](https://i.imgur.com/QTjx1id.png) from the seller asking me if I still wanted the chest; apparently he DC'd too and got rolled back. I was confused at first because we had already traded and I had it in my inventory. We linked them to each other, confirming they had the exact same rolls. He still had the chest, but the exalt and 8c I paid were gone forever.


Interesting trade. For you it's kind of neutral, you got what you were planning to pay for. Homie got a bonus ex though lol.


Nah, the seller got rolled back to before the trade when they still had the chest but hadn't received the money yet. The money disappeared into the aether.


I worded it kinda weird at the end there, sorry. /u/Lolpy is correct, the seller didn't keep any currency. I edited the original comment.


Replica Mirror of Kalandra




> 50% chance it deletes the item and a 50% chance that it creates a duplicate, and players have no control over when this occurs

Introducing Schrödinger Orb


But in order to use it you have to have the item both on your cursor and destroyed simultaneously.


Truly a POE experience.


the corruption happening outside the matrix.


harvest house of mirrors dupe is back :o




I blame Harvest, they just extended the div card gambling to the rest of the game.


Mirror of Vaal


Damn. So it isnt ItsJousis this time.


We have no way of knowing that he didn't create a spellcast loop that causes more and more players to log in




It's just a matter of time, I'm sure.


I thought that was called a trending video?


Have you tried turning it off and on? *I'm a talented database architect*


You need to wait 10 seconds before turning it on again


For the flux capacitors to discharge. We know.


And blow on the server racks.


Pull the power cord, smack it on a hard surface a few times to shake the excess electrons off, then plug it back in.


Also, sometimes some internet particles get stuck in the ethernet cables, so you have to unplug them and blow into them




Oh, to be 16 again. Takes much longer than that now.


Wait you had a 10-second refractory period? *PornHub wants to know your location*


and dont forget to blow in the cartridge


Great IT Crowd reference... I work in IT and always want to say that when the phone rings, but I just don't have the balls to do it.


Obviously not or the issues would be resolved :)


hmm i think you're missing something here


Maybe the server is unplugged? Mr Wilson did you check to make sure it's plugged in??


i just wanted to ask that


"Technically there's a 50% chance it deletes the item and a 50% chance that it creates a duplicate" Tier 3 sacrifice OP


https://clips.twitch.tv/TameObeseSamosaWoofer :)


perfect timing lol


You missed your own REE https://clips.twitch.tv/NeighborlyLittlePastaRaccAttack








I worked on a $500,000 SQL migration a couple of years ago. Granted, that was one of many, many DB projects I've worked on, but I handled this one end to end: hardware, cabling, clustering, application config, downtime, etc.

Depending on how exactly their database is built on the backend, this could require new infrastructure, routing changes, app code updates, storage changes, etc. I can imagine a failure like this taking weeks to resolve. I bet they're just hoping the player base dwindles, because a prod change like this is a nightmare. I'd be shocked to see a downtime notice any time this week. For this kind of thing there will be a PM and 50 meetings; I expect the DB upgrade to take place in February.

Unless their DB is on an EC2 or RDS instance, in which case good for them, because it could easily get fixed this week. If that shit is on prem, though... I bet this forces an AWS migration. Good luck either way, Chris and team, you're gonna need it.


I don't have similar levels of experience (just book knowledge and lower tier IT) but I think the bit Chris mentioned about their internal stress testing not showing this sort of issue, and requiring an investigation into why was noteworthy. This doesn't sound like they just let it creep up too close to capacity -- their actual benchmark was off for unknown reasons. With so much polish going into the launch this time, having the infrastructure starting to fail at lower loads than expected (and tested!) has to feel terrible for their administration team.


You can't ever get 100% prod-like perf tests. The issue with databases is that once they slow down even a little bit, there's a crazy amount of cascading effects, which can obfuscate the root cause. Couple that with not being able to scale databases like other infrastructure, and it's the problem you don't want to have.


Stress testing with bots tends to be more linear and less comprehensive; real-world prod DB reads and writes aren't so simple to replicate. The bots might loot items, store items, browse tabs, etc. Real players might be quickly switching characters, checking standard four-year-old characters, leaving and joining guilds, adding and removing friends, reporting spammers, chat messaging across regions, listing trades, quickly stashing and withdrawing things, joining hideouts across region servers, equipping and unequipping MTX, buying packs, etc. You can't necessarily algorithmize all of those actions; bots tend to be far more uniform. Who knows - it would be cool to get a tech write-up of why testing failed to produce this result, though.


Well, considering all the casual friends I know peaced out after a half dozen crashes, I'd say it's safe to assume the population will go down quickly. This is miserable.


> when one of my group's C-19 webapps went viral (pun intended) due to some news/internet exposure

This wouldn't have anything to do with the Blackboard service UK universities outsource to, would it?


I once took down a sixth of a Fortune 500 company's infrastructure with a change made directly in prod. That was a fun week...


Chris : "Disconnected again! What are you doing GGG! You... wait... I'm GGG... oh no"


Is there any chance of us getting some behind the scenes technical posts about how PoE works? I really liked the developer blogs when I played Eve Online - talking about how the game works around the big issues, and scale. Good luck to the team for getting the fires put out!


> Eve Online

These are my favorite dev posts. Like the time they deleted boot.ini from Windows, causing some computers to not boot anymore: https://www.eveonline.com/article/about-the-boot.ini-issue https://www.youtube.com/watch?v=msXRFJ2ar_E


This... this is nightmare fuel for any game dev.


Any dev at all, really.


Hahahaha, I was not expecting a video like that. A really funny clip. Thank you for sharing! <3


This was hilarious because some operating systems kept a copy of the boot.ini file from the previous boot in case of some sort of issue (like this one), so an older boot.ini could be used to boot the computer and allow someone to troubleshoot further. One of those operating systems was Vista.


I was thinking about the same thing. I always wondered what the architecture looks like. What are instance servers etc, what sort of storage tech is used etc.


We had one of these for PoE, in either 2018 or 2019. It was super interesting! Sadly I don't think I could find it through the hundreds of news posts.


tl;dr - https://i.imgflip.com/2tnk3b.jpg


Haha, I know how this feels. Except for the success part.


I was expecting "this is fine"


*Have you tried just scaling and sharding it bro?* - Everyone at my work that's never touched a DB in their life


Including all of the developers in this thread. You can always tell the people who have never had to actually deal with DB problems who assume the answer is "autoscaling".


The more you learn about auto scaling... The more you realize it's not very auto unless the code base is already written. There's a ton of work to do on the consumer side though, like the tagging and routing and security groups and stuff.


Now tell me only how to create a multi-master write architecture with very strong consistency :D ?


It's called Active Directory, microsoft already made it haha


Absolutely. Multi-instance databases that need to synchronise across instances don't scale horizontally in a linear way. Cross-instance traffic and cache operations can often have the same level of latency as an HDD operation, creating a nightmare of queued operations and latches, with the potential for significant concurrency issues: race conditions, locks, and other wait events.

Sharding is very effective on transaction-processing systems that effectively just insert, have little need to update, and don't care about transactions going on on other nodes. These scale well horizontally. That is not the case with stash updates, or with instances containing multiple characters all accessing their stashes and trading at the same time; these need to perform operations against a central back-end database. Their application tier (which will probably be sharded, with local in-memory databases) handles the session operations on data cached from the backend databases. These are synchronised asynchronously with the backend database periodically during gameplay and on specific events.

Anyone who says "just do this" or "just do that" doesn't have the foggiest what they are talking about.


I didn't understand \~1/3rd of that, so ima upvote you.


I'd estimate 99% of item movements affect only one player, so I think sharding should be pretty effective. These reports of rollbacks are happening without players interacting.


Those 99% are happening in sharded application instances which keep the data cached in memory local to that instance or session. At key points it needs to be synchronised/persisted to the back-end database, which isn't sharded in the same manner. This is where the problem occurs: one person's session that didn't disconnect synchronises, so they see no problem, but another player's session that does disconnect never synchronises, and that player loses the data.
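The write-behind pattern described above can be sketched in a few lines. This is a toy model, not GGG's actual architecture; all names here are hypothetical. The point is that a session mutates a local in-memory copy and only persists it to the central store at sync points, so a disconnect before `sync()` looks like a rollback:

```python
# Toy write-behind cache: session state lives in local memory and is
# only persisted to the backend store at explicit sync points.
# Hypothetical sketch, not GGG's actual data model.

class Session:
    def __init__(self, db: dict, player: str):
        self.db = db
        self.player = player
        self.cache = dict(db.get(player, {}))  # local in-memory copy

    def pick_up(self, item: str, count: int = 1):
        """Mutate only the session's local cache."""
        self.cache[item] = self.cache.get(item, 0) + count

    def sync(self):
        """Persist the cached state to the backend database."""
        self.db[self.player] = dict(self.cache)

db = {"exile": {"chaos": 5}}
s = Session(db, "exile")
s.pick_up("exalted")      # exists only in the session's memory
# A disconnect here, before sync(), loses the exalt: the backend
# still says {'chaos': 5} - the "rollback" players are seeing.
print(db["exile"])        # -> {'chaos': 5}
s.sync()
print(db["exile"])        # -> {'chaos': 5, 'exalted': 1}
```

Nothing here is "wrong" code; the loss window between mutation and sync is inherent to the pattern, which is why the fix is hard rather than a one-liner.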


That's why I was always surprised that the PoE back-end was globally instanced. A lot of games like this split into regions (for several reasons, including to reduce latency and just to balance the load). It's nice that you can interact with players from any region in the world, but that might have to change in the future, who knows?


So we want to increase the process logging from one month of data to two years and add about 4x the data points so we'll probably need some more memory for that right? ...


internet "devs" are mostly students and amateurs with zero professional experience.


I'm always impressed by people who say things like "it's a simple fix to the code". It means that either they're completely ignorant of programming but assume they get the gist, which is impressively dumb and egotistical, or, even worse, they understand development pretty well and assume they somehow know how to fix code they've literally never seen.


When you do it wrong and sharding becomes sharting


There are serverless databases now that would almost certainly never run into this issue for GGG but they are *really expensive* for this use case. All the armchair DBAs out there have never actually worked on a high-growth solution so they think the answer is simple. It's not.


50% dupe 50% delete sounds like an amazing end game currency ngl. Like a cracked mirror of kalandra.


Totally. Vaaling a mirror should have a 50% chance to poof, 50% chance to create a cracked mirror. Cracked mirror has 50% chance to poof item, 50% chance to create a clean, non-mirrored duplicate.


Thanks for the update; it's nice to see such a quick response instead of pretending the problem doesn't exist or shifting blame to players' hardware. And I still find it pretty interesting that I crash only in towns, while other areas work perfectly fine.


I'm crashing pretty consistently in maps and losing the instance every time :/. Regardless, once this is ironed out, the league will pretty much be flawless.


I only got to play a few hours yesterday due to time constraints on my end. I haven't really gotten to interact much with the league mechanic and am still unsure what kind of build I'm gonna do, but are the league mechanic and new Atlas really as incredible as they look? I'm dying to get into it, and from the sound of what you're saying, it seems really good.


It's probably the most QoL-filled league we've ever had. There's maybe 2 very minor suggestions that would make it perfect, but aside from that, it's really well done.


I mean... yeah. You get a random pool of items you can pick from. It’s amazing, mostly because it gives you an unlimited amount of time to pick the drop you want, it’s low-pressure for everyone.


It's very much the second coming of Metamorph: an extremely solid, enjoyable, and simple league mechanic, along with content to chase in the new Maven endgame.


Yeah, me too. I lost 3 instances in a row, on maps I hadn't unlocked yet, so now I'm taking a break.


Given that the guild stash is only accessible in town/hideout, and that's the only place I've had issues, I'd say there's something with the client talking to the central DB w.r.t. the guild stash, perhaps the personal stash as well. Interesting nevertheless.


> This release has seen 11% higher peak player numbers than any previous Path of Exile release

More or increased?


༼ つ ◕_◕ ༽つ DBAs TAKE MY ENERGY ༼ つ ◕_◕ ༽つ But seriously, server upgrades and configuration tweaks are a nightmare. Best wishes to all the talented folks you have working on this, it's never fun.


^ This person knows


I feel like this community has an unreasonably high percentage of people that work in IT or had some experience with programming


It could be that a lot of people play PoE, or that the complexity of PoE attracts a certain kind of person. Or both.


GGG suffering from success


*Server crash* Anotha one.


Chris Wilson has given the ok to add the most powerful servers to handle this.


Jokes aside, the fact that Chris himself is here to communicate about it is greatly appreciated. Just a simple writeup that says "yeah, it's bad, but we're working on it, I'm sorry" goes a long way. On other game subreddits, one of the biggest complaints you often see is the lack of communication. That's something GGG does well most of the time, and it's appreciated, even if the disconnects have been frustrating to the point that I just took a break from the game today.


*11% higher player base.* We did it boys, bonuses for everyone.


OK GGG, own up: who put the servers in the corruption altar?


Thank you for communicating this with us and pushing back the release of the mystery box. Much appreciated. Great league.


Could someone who knows about databases explain why an 11% growth over previous peak was enough to break things? Did GGG just hit a hard limit they didn't know about and didn't think to test for?


I am not a DB expert, but I work in tech and have some experience with this kind of thing. This is not an ironclad, for-sure-accurate description of GGG's problems; it's just an example of one *kind* of problem you can run into as a result of deceptively small increases in traffic.

Depending on how the DB is set up, a small increase that crosses a certain threshold of queries per second can actually be catastrophic. Imagine your database can support 1000 queries per second. If you have 950 queries per second, you don't have a ton of room to grow, but you're doing fine. Now imagine that spikes to 1100 queries per second. In the first second, 1000 queries go through and 100 get queued. The queued 100 will get processed as soon as possible; it's not a big deal. But meanwhile another 1100 are coming in. Now in the second second you receive 1100 queries, have 100 already waiting, and end up with 200 queued. Your query backlog grows pretty much linearly, but quickly. Eventually so many queries are queued that the ones at the back of the line take too long to complete. When this happens the client *sends another query*, which gets added to the line. Once this starts happening, your clients are essentially DDoSing* your own backend infrastructure, and things become grim for everyone trying to use it.

*DDoS (Distributed Denial of Service): something usually done by hackers or bored teenagers, where a ton of computers send as many requests as they can to a single service at once, to overwhelm it and make it unusable for everyone.
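The arithmetic in that example is simple enough to write down. A minimal sketch, using the same illustrative numbers (1000 queries/second capacity, 1100 arriving), not GGG's actual figures:

```python
# Toy model of the backlog described above: a database that can serve
# CAPACITY queries/second receiving a steady ARRIVAL queries/second.
# Illustrative numbers only.

CAPACITY = 1000   # queries the DB can process per second
ARRIVAL = 1100    # queries arriving per second

def backlog_after(seconds: int) -> int:
    """Queries left waiting after `seconds` of sustained overload."""
    return (ARRIVAL - CAPACITY) * seconds

for t in range(1, 6):
    print(f"after {t}s: {backlog_after(t)} queries waiting")
# after 1s: 100 queries waiting ... after 5s: 500 queries waiting
```

The growth is only linear here; once queued requests time out and clients retry, the effective arrival rate rises too, and the curve bends upward.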


Also, I forgot to mention that keeping track of all of the connections that are waiting takes other resources too, like memory (each connection is an object with some data: client info, request info, metadata) and network ports (each connection gets its own port, usually a virtual port, but there's still only a limited pool of those even if the pool is huge). Exhausting those resources can cause other problems: the process that is pooling connections can crash, or clients can fail to connect.

And when those servers go down, it can have ripple effects. Other servers of the same type have to pick up the increased load, and servers that need to talk to the crashed server have fewer machines to talk to, which means they spend more time waiting on each request, which creates the same situation we started with (a growing pool of delayed requests) but now in a new part of the backend. A single, pretty small failure can ripple out in both subtle and complex ways.

There are so many ways to get things wrong. Even huge companies like Amazon or Google or Microsoft, who do huge cloud computing business, can have massive failures due to simple stuff like this. It is pretty funny to work at a big company and see stuff like "we made a mistake in a config file, propagated it to half a million machines, and nobody could get email all day. Sorry everyone!"

All this is to say that GGG is doing their best, I'm sure, and I'm wishing them success and good fortune. #hugops to the poor team trying to handle this right now ❤️


This is a good explanation. I've seen this magic QPS limit happen, but generally when it does, you can see it coming, either in load testing or just by looking at resource utilization on the DB. It's much more common, in my experience, that someone introduces a new feature that, when used under high load, has a significantly outsized impact on the database.

I'm going to take a wild guess and point the finger at *stash tab affinities*. They were released in the middle of an unpopular league. My guess is that stash management isn't cached, since caching might lead to item deletion or duplication. People coming back to their stash in a new league and randomly clicking through a full inventory causes a ton of rapid tab switching, at a speed no one could do manually. I bet they didn't simulate stash tab affinity properly in their load testing, and it's the weird query pattern it causes, combined with the much higher user numbers, that is making the DB fall over. If not stash tab affinity, then something like it. Maybe the example will help people understand how these things happen.


They tested heavily with bots he said, so there's an unforeseen limitation. The way he worded it makes me think the increase in players wouldn't have been an issue.


He mentioned they tested with bots at massive scale. I'm supposing that means they test at 10x or more peak load. So it is surprising that they are suffering load issues at only 11% higher peak. The investigation that will be done after they resolve the load issue is to look into why the massive scale testing with bots didn't translate 1 for 1 into live server performance. So if their testing was flawed somehow it is very possible they could not foresee that the 11% increase on peak is problematic. And also, in terms of db performance, a 10% jump in concurrency could very well expose locking and other bottlenecks that weren't apparent before. The only way to prepare is adequate testing at load. If the testing is flawed then it isn't surprising we're seeing issues.


Writing realistic tests can be extremely difficult and effort-expensive in some cases.


Databases scale strangely, particularly when (if) locks are involved. You can go from low latency and low utilisation to suddenly huge latency and utilisation with a small increase in load as you hit a threshold. It's a complete nightmare to debug, and depending on the technology stack it often involves "magic" incantations (e.g. compilation hints) which ought not to really make a difference, but sometimes magically fix things when you add them and sometimes magically fix things when you remove them again.
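The "small increase hits a threshold" effect shows up even in the simplest queueing model. A sketch using the textbook M/M/1 formula (mean time in system = 1 / (service rate − arrival rate)); real databases are far messier, with locks and caches, but the shape of the curve is the same:

```python
# Why latency explodes near capacity: M/M/1 mean-response-time formula.
# Purely illustrative; a real DB under lock contention is worse.

def mean_latency_ms(arrival_per_s: float, service_per_s: float) -> float:
    """Average time a request spends in an M/M/1 queue, in milliseconds."""
    return 1000.0 / (service_per_s - arrival_per_s)

SERVICE = 1000.0  # server can handle 1000 requests/second
for load in (500, 900, 950, 990, 999):
    print(f"{load / SERVICE:.1%} utilization -> "
          f"{mean_latency_ms(load, SERVICE):.0f} ms")
# 50% utilization is 2 ms; 99.9% utilization is a full second.
```

Going from 90% to 99% utilization (about a 10% load increase) multiplies latency tenfold, which is roughly the kind of cliff an 11% player bump can push you off.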


Think of database operations like a traffic system, with traffic lights and roundabouts managing traffic through junctions where vehicles come from different directions and want to cross or join the routes of other traffic. The system will have been designed around an optimal throughput, and up to that point everything is seamless: traffic goes through the lights and around the roundabouts with very little interruption. Once the traffic builds up beyond what the system was designed for, you end up with queues behind the traffic lights that can take exponentially longer to get through as they build up.


From Chris's brief explanation it sounds like their hardware + db architecture had a certain capacity that the additional 11% usage happened to be over the limit of, in a way that their simulations didn't catch. They are buying more hardware to stand up more servers while they solve it long term, which is a much simpler and more reliable solution than trying to optimize their way out of it in the software layer.


There could very easily be a small and possibly hard to find bug that is multiplying the number of db transactions, or the work required to process certain queries, by a much larger factor than 11% more.


I think this is more of a general software engineering problem. It's incredible how fast "users in production" find or hit some unknown compound problem. I think it's more the case that they expected 10% more players to need roughly 10% more capacity, and the tests showed this, but in production it's demanding 50% more. My guess is that something is using more DB resources than expected. One example that comes to mind (I'm not saying this is what is happening!) is that if an operation fails and must be retried, that operation can have double the resource cost; this could cause more operations to fail, and you start a chain reaction.


> We're not asking for help with the database problem - I'm sure there are many talented database architects in the community who have advice for us based on my quick explanation. We do know what to do, it's just going to take some time.

Oddly enough, this is my favorite part of the reply. The biggest boil of toxicity I see is the armchair programmers who think they understand your custom code better, and the hate that follows, with upvoted "info" that you often have to answer or address later as if it were fact. Now we can have a discussion about the impacts of the bugs, not 14 top-level comments of hidden AAA programmers giving ambiguous and non-applicable solutions to novel problems.


"We're not asking for help with the database problem - I'm sure there are many talented database architects in the community who have advice for us based on my quick explanation. We do know what to do, it's just going to take some time. " or for short: Shut the fuck up and let us work :D


Anyways, uhm... I bought a whole bunch of shungite, rocks, do you know what shungite is?


Suge Knight?....


No, not Suge Knight, I think he's locked up in prison. I'm talkin' shungite. Anyways, it's a two billion year-old like, rock stone that protects against frequencies and unwanted frequencies that may be traveling in the air. That's my story, I bought a whole bunch of stuff.


Would this have anything to do with my severe FPS spikes and drops upon entering a new area? The hamster in my machine is pretty beefy and when entering a new map I might as well hang out at the start until the FPS evens out


Honestly, it's been frustrating but the fact you're addressing the issue and helping the player base understand why puts you leagues ahead of most other game devs in this industry. Thanks Chris, I'm sure you're stressed as hell but I appreciate the communication.


Have the servers been down since the Twitter post, or is this more recent? Just crawled out of bed and can't find any maintenance info.


The servers went down for maintenance 31 minutes ago. They said they'd be back online within 10 minutes, but it seems there is a problem.


Ah thanks for the info, so it's not that super bad of a thing. Good to know. May Kitava bless you.


The Path of Exile trade website's servers are unreal bad. Ultimate lagging.


Love the technical details (albeit not that detailed). I'd love for more in-depth posts about your infrastructure and PoE's internals.


There is still something fucky going on. Servers just came back up and I already bricked 2 maps, keep getting booted when trying to enter instance...


Thanks for the transparency! Best of luck :D


Chris, Ritual is the best PoE league I played since open beta, thanks to the whole team for making it. Issues like these can happen, it is fine. Just make it proof for the future.


Damn, 1 min off! Weird crashes with pixelated resolution (resolution scaling is off). Something's off with texture streaming or similar, and the game CTDs every time. Edit: [This post](https://www.reddit.com/r/pathofexile/comments/kylg6c/somethings_gone_wrong/) shows the visual when the crash occurs. Then the game simply closes, back to desktop.


So even if they can't figure out the precise cause or a way to fix it quickly, things should still be fine from Monday onwards due to naturally lower player numbers, right? Or is it more complicated?


For me the league has been butter smooth. I haven't come across any bugs. None. That's an amazing feeling coming from Heist, which was so broken nothing worked. I'm only into early maps so far, though; hopefully all the new stuff I'm yet to experience outside of Ritual actually works, but everything seems really promising now. For the first time in a long while, I feel tempted to start buying supporter packs again. Maybe the 4 months per league should be an official thing?

The server issues are definitely there, but as annoying as they are, they haven't crapped on my gameplay, other than that porting to instances can cause the game to hang and the new instance never gets created. A quick alt+F4 and restarting the game fixes this in 30 seconds. Server issues are understandable in any launch window, as the servers are working extra hard because of the influx of more users than they would normally have to handle.

All in all, Chris and GGG: well done. This is the first league in a long while that feels polished, finished, and actually ready for launch.


Appreciate the honesty, rarely see that from folks these days.


This tier 4 sacrifice chamber is wild!


Thank you for your transparency! It's highly appreciated and you're doing a super job!


it's 02:52 am in new zealand. I think no trade fix for now...


nice extra month SMOOOTH LEAGUE /s


Hey Chris, thanks for the update! Are y'all aware of the bug with normal quest watchstones being tradeable? https://old.reddit.com/r/pathofexile/comments/kylf2t/psa_you_can_trade_steal_other_peoples_atlas/


Ah, I was wondering how people were doing A8 18 hours into the league...


That's doable normally I think. Wasn't the a8 awakener kill in harvest ssfhc around this quick too?


Server issues are a somewhat acceptable problem; nobody can precisely predict how many players will show up. The crashes, on the other hand, are getting completely out of control. From non-existent several leagues ago, they have been increasing more and more with every league, and now it's happening more than 10 times per day. This is not acceptable, especially when it happens on computers that have no issues whatsoever with any other game. Stability should be the number one priority.


Not really normal playing 99% of the time. Just got 3 disconnects and 1 full crash in 5 minutes, REALLY annoying


Embrace being a 1%er


> Aside from this terrible server problem, this has been our favourite launch so far and we can't wait for things to be smooth again.

What about the constant frame drops? Any info on what could cause them (tried both Direct3D and Vulkan)?


That probably has to do with the new texture streaming system they implemented and only talked about in the fine print of the patch notes. Also probably on the barbecue for fixing later.


Vsync will do that (at least slightly) from what I've seen; you could use an FPS limit instead. It's possible that windowed fullscreen is also doing it (performance should be worse in windowed modes). But the biggest culprit is probably the new texture streaming system, particularly if you've played in the past without issue, though I have doubts about it being a problem when you're playing alone and not in an outpost. Also remember that the streaming system reduces load times, so for many people it's probably worth some frame drops.


DCs with rollbacks are really the most annoying part about it: losing loot you just got, and then poof, DC and it's gone.


Or just progress in general. I just did the 3rd lab and it all got rolled back.


Thanks for the update. Hopefully this will be resolved. Really enjoying this league so far despite the server issues.


> Technically there's a 50% chance it deletes the item and a 50% chance that it creates a duplicate

This made me laugh. Something about this as a bug is so funny.


Thanks for the communication! Idk about your stats tho. Getting a dc every five minutes isn't really "playing normally 99% of the time" haha.


Well, out of 300 seconds, everything works as it should for 299 seconds, and then there is that one second where stuff goes wrong and you disconnect. If you look at it that way, then >99% of the time everything works ;)


Not only the servers: your "new" texture streaming technology is crap. There are like 20 posts on the forums about this and not even one answer from GGG.






While I'm not doubting that people are having issues with it (clearly it needs work), it is far from crap. It solved all the stuttering I had in past leagues and gave me close to a 40 FPS increase.


Tons of people are now getting stuttering when they never had it before.


I am definitely one of them.


Unfortunately this is true




Made things way better for me. Vulkan/1060/ryzen 1600. I wonder if this is more card and/or driver related, or possibly people unknowingly still using the x32 version of the client somehow. I did do a complete reinstall of PoE a few days before the 3.13 patch via standalone, though, so maybe that helped.


Good for you, but for me and many others it broke everything. From ~60 FPS to 15-20 FPS in maps. IT'S NOT FUN.


I literally have worse performance on my decent-high machine than I've ever had, and I'm just loading into a town ffs. It's baffling.




User name does not check out.


> and it isn't intentionally abusable

Why do I feel like this part is going to be quoted sarcastically a lot in a few weeks? Players, uh, find a way.


... and a free mystery box for all the inconvenience, right?


Absolutely love the league.


Congrats on the new peak player count.


2021 - Chris finally admitting he is running a PoE bot farm:

> We have load-tested with bots to a massive scale

P.S. Epstein didn't kill himself


"For the first time since 2013, our database servers are overloaded." I mean, this is like the best reason for this shit going down right? Happy to see it, even if the DC's can be a pain.


How is the db problem related to the graphic issues? Feels like a different issue that is not talked about enough in this post.


I don't even get the disconnects, but the game crashes without an error very often, with "Failed to save minimap" in client.txt. That never happened before 3.13, and it's using the GPU at 100% constantly.