EU Gateways – Persistent Rubberbanding + Packet Loss Near Origin (WinMTR if required)
Title:
EU Gateways – Persistent Rubberbanding + Packet Loss Near Origin (WinMTR attached)

Description:
I am experiencing persistent rubberbanding on all EU gateways in Path of Exile 2. The issue has been reproduced under multiple controlled conditions and across two independent internet connections.

Symptoms:
Severe rubberbanding (server correction resets)
Visible in-game latency spikes
Occurs in all instances (maps, hideout, town)
Not limited to peak hours

Environment:
ISP: Deutsche Telekom (Germany)
Two separate households (directly neighboring properties)
Two different routers (FritzBox and Speedport Pro)
LAN and WLAN both tested (identical behavior)
IPv6 disabled (no change)
Bufferbloat test result: Grade A

VPN Testing:
Mullvad VPN tested (Frankfurt and Amsterdam exits)
No improvement
All EU gateways tested: Frankfurt, Amsterdam, Milan, London
No gateway-specific improvement

Other online games:
No packet loss
No rubberbanding
Stable latency

Network Diagnostics:
Active session endpoint: 173.233.129.236:21360 (PathOfExileSteam.exe)
Traceroute: Telekom → GTT (Munich) → Cloudflare → backend → origin
Baseline RTT: ~33–40 ms

WinMTR during active lag:
0% packet loss up to the Cloudflare edge
Packet loss begins at 188.42.187.111
3% packet loss continues to the final hop (173.233.129.236)
Average RTT increases from ~33 ms to ~58–59 ms at the destination
Worst spikes exceed 1500 ms

This indicates packet loss occurring behind the Cloudflare edge, near the origin or hosting segment, because:
Two independent connections reproduce the issue
VPN ingress variation does not change the behavior
Loss begins near the final hops and propagates to the destination
Only PoE2 traffic is affected

This appears to be either:
Origin/backend packet loss
Cloudflare → origin interconnect congestion
EU cluster network instability

If needed: WinMTR logs captured during active rubberbanding.

Please investigate packet loss and backend network behavior on the EU session nodes corresponding to 173.233.129.236 (or equivalent). I can provide additional logs or packet captures if required.
Last bumped on Mar 3, 2026, 2:28:43 PM
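A minimal probe for capturing time-stamped loss and spike events against this endpoint could look like the following Python sketch. It assumes a Windows host (consistent with the WinMTR workflow) and English-locale ping output; the 150 ms spike threshold is an illustrative choice, not a value from the report above.

import re
import subprocess
import time
from datetime import datetime

TARGET = "173.233.129.236"   # session endpoint from the diagnostics above
INTERVAL = 1.0               # seconds between probes
SPIKE_MS = 150               # illustrative threshold for logging a spike

sent = lost = 0
while True:
    sent += 1
    # One ICMP echo request with a 1000 ms timeout (Windows ping syntax).
    out = subprocess.run(
        ["ping", "-n", "1", "-w", "1000", TARGET],
        capture_output=True, text=True,
    ).stdout
    match = re.search(r"time[=<](\d+)ms", out)  # English-locale output assumed
    stamp = datetime.now().strftime("%H:%M:%S")
    if match is None:
        lost += 1
        print(f"{stamp}  LOST  ({lost}/{sent} = {100 * lost / sent:.1f}% loss)")
    elif int(match.group(1)) >= SPIKE_MS:
        print(f"{stamp}  SPIKE  {match.group(1)} ms")
    time.sleep(INTERVAL)

Note that ICMP probes only approximate the game's UDP path; a packet capture remains the stronger evidence.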
Also I just died in a crucial map due to a latency spike. I have been experiencing these issues since launch, and the persistent rubberbanding significantly affects gameplay and progression. If this problem cannot be resolved or acknowledged, I may unfortunately have to stop playing PoE2, despite wanting to continue supporting and playing the game.
Hello,
The Paris server is also affected by this issue. I encountered huge latency spikes on it yesterday evening. Smooth gameplay needs rock-solid servers. I dream of it :)
You can try setting your DNS to 1.1.1.1, 4.4.4.4, or 8.8.8.8; this affects routing. Your ISP sends packets on to another ISP and has no control once the next ISP takes them. You can also check your MTU configuration: it is usually 1450, not 1500. Smaller packets are grouped to reach the destination.
Last edited by Biq#2171 on Feb 25, 2026, 6:33:12 AM
Thank you for the suggestions regarding DNS and MTU. I have tested both points to rule them out properly.
1) DNS (1.1.1.1 / 8.8.8.8)
Changing DNS does not affect an already established game session, as DNS is only used for hostname resolution. Once the client connects to a specific session endpoint (in my case 173.233.129.236:21360), traffic flows directly to that IP. Since the packet loss appears during an active session and is measurable via WinMTR to the resolved endpoint, DNS selection should not influence the observed behavior. For completeness, I can still test alternate DNS providers, but from a technical standpoint this is unlikely to resolve transport-layer packet loss.

2) MTU Testing
I performed proper MTU discovery using:
ping 173.233.129.236 -f -l 1472
1472 bytes failed due to fragmentation (as expected with PPPoE).
1464 bytes succeeded without fragmentation.
This results in: 1464 + 28 bytes (IP/ICMP headers) = 1492 MTU.
An MTU of 1492 is completely normal for Telekom DSL (PPPoE). There is no unusually low MTU (e.g. 1400 or below) that would indicate a path MTU issue. Therefore, the MTU configuration appears correct and not abnormal.

3) Remaining Observation
WinMTR captured during active rubberbanding shows:
0% packet loss up to the Cloudflare edge
Packet loss begins at 188.42.187.111
3% packet loss continues to the final destination (173.233.129.236)
Worst-case latency spikes >1500 ms

Because the loss begins behind the Cloudflare edge and propagates to the final hop, this does not appear to be a local MTU or DNS issue.

Given that:
Two independent Telekom connections reproduce the issue
VPN ingress variation does not change the behavior
Other games are unaffected
MTU is normal (1492)

the most plausible remaining cause is packet loss or congestion near the origin/hosting segment rather than a local configuration issue.

I appreciate the input and am open to further technical suggestions, but based on the data collected so far, DNS and MTU do not appear to be the root cause.
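The payload search in (2) can be automated with a small binary search. The sketch below assumes Windows ping and uses the presence of "TTL=" in the output as the success marker, since that token also appears in non-English (e.g. German) locales.

import subprocess

TARGET = "173.233.129.236"  # endpoint from the posts above

def fits(payload: int) -> bool:
    # ICMP echo with Don't Fragment set; a reply line containing "TTL="
    # means the packet traversed the path unfragmented.
    out = subprocess.run(
        ["ping", "-n", "1", "-f", "-l", str(payload), TARGET],
        capture_output=True, text=True,
    ).stdout
    return "TTL=" in out

lo, hi = 0, 1472  # 1472 + 28 bytes of IP/ICMP headers = 1500 (Ethernet max)
while lo < hi:    # binary search for the largest payload that still fits
    mid = (lo + hi + 1) // 2
    if fits(mid):
        lo = mid
    else:
        hi = mid - 1

print(f"Largest unfragmented payload: {lo} bytes -> path MTU = {lo + 28}")

On the connection described above, this should print 1464 bytes -> 1492, matching the PPPoE overhead (1500 - 8).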
Additional technical findings after further structured testing:
I was able to precisely characterize the behavior during lag spikes, and the pattern strongly suggests server-side simulation stalls rather than a pure routing or client-side issue.

Observed behavior during spikes:
• FPS remains completely stable.
• The in-game latency graph spikes abruptly to very high values.
• The game world freezes (no entity movement, no combat resolution).
• After several seconds, the game resumes and rapidly “catches up”, replaying actions in accelerated succession.
This pattern is fully consistent and reproducible.

Key characteristics:
1. The spike is sudden, not gradual.
2. There is no gradual packet degradation beforehand.
3. Client performance remains unaffected.
4. The catch-up phase happens instantly once the spike ends.
5. In Endgame, smaller spikes occur periodically (approximately every 30–50 seconds), with occasional longer stalls.

Interpretation:
This behavior aligns with a server-side tick stall or backend blocking event:
• The authoritative simulation appears to pause.
• The client waits for server state updates.
• Once the server resumes processing, buffered state updates are transmitted.
• The client reconciles and fast-forwards the simulation.
This does not resemble typical transport-layer packet loss, which would usually manifest as jitter, retransmission variance, or irregular packet delay rather than consistent freeze → catch-up behavior.

Campaign vs Endgame pattern:
During campaign progression:
• Lag behavior appeared more instance-dependent.
• Creating a new instance occasionally improved behavior.
During Endgame:
• Spikes are more persistent.
• Smaller spikes appear periodically (approx. every 30–50 seconds).
• Larger stalls occasionally occur where the simulation completely halts.

The periodic nature in Endgame is particularly notable. Network congestion typically does not manifest in regular intervals, whereas periodic backend tasks (e.g. state serialization, database persistence, cluster synchronization, or garbage collection) can produce such timing patterns.

Additional context from prior diagnostics:
• Two independent Telekom connections (neighboring properties) reproduce the issue.
• LAN and WLAN both tested.
• VPN (Frankfurt and Amsterdam exits) does not alter the behavior.
• MTU verified at 1492 (normal for PPPoE).
• WinMTR shows packet loss beginning near the origin/backend segment, not upstream in Telekom or the Cloudflare edge.

Taken together, the evidence suggests that the issue is more likely located in:
• EU cluster node performance under certain load conditions,
• backend I/O or persistence interactions,
• or the Cloudflare → origin interconnect / origin processing layer.

Given the confirmed freeze → catch-up behavior with stable FPS and abrupt latency spikes, this appears consistent with server-side simulation stalls rather than client configuration or local routing. I am willing to provide additional logs, time-stamped reproduction windows, or packet captures if that would assist further investigation. Thank you for taking a closer look at this.
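Since the ~30–50 s cadence is the load-bearing claim here, it is worth checking quantitatively. Given spike timestamps collected (for example with the probe sketched earlier in the thread), the regularity of the inter-spike gaps can be tested in a few lines; the sample values below are hypothetical.

from statistics import mean, stdev

# Hypothetical spike timestamps, in seconds since session start.
spikes = [31.2, 68.9, 104.5, 141.0, 178.3]
gaps = [b - a for a, b in zip(spikes, spikes[1:])]
print(f"mean interval: {mean(gaps):.1f} s, stdev: {stdev(gaps):.1f} s")
# A stdev that is small relative to the mean suggests a fixed-interval
# backend task; random congestion would produce irregular gaps.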
After further controlled testing, I observed a strong correlation between spike intensity and the size of my character’s inventory/state.
Test performed:
I drastically reduced my (global) inventory, as I didn't plan to play this league any longer with these issues (I gifted the items to a friend). I remained in the same zone and observed the latency behavior over several minutes.

Result:
The periodic spikes (approx. every 30–50 seconds) still occur.
The interval between spikes remains unchanged.
However, the spike amplitude is significantly reduced.
Rubberbanding is barely noticeable compared to before.
Previously, with a full inventory and a large accumulated character state (items, modifiers, etc.), the spikes were clearly visible and caused noticeable rubberbanding.

Interpretation:
The constant interval but reduced spike intensity strongly suggests that:
A periodic backend task is running on a fixed timer.
The execution time of this task scales with the size/complexity of the character state.
A reduced state size results in a shorter blocking time.

This behavior is inconsistent with:
ISP-level routing issues
MTU configuration
Access-layer bufferbloat
Network-layer problems would not change in intensity simply because the inventory size was reduced.

Instead, this pattern is consistent with periodic:
Character state serialization
Persistence / database commits
Inventory snapshots
Session checkpointing
Backend synchronization

The key point is that the spike interval remains constant, while the spike duration scales with character data size. This appears to indicate a server-side periodic operation whose execution time increases with state complexity. I can reproduce this behavior reliably.
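As a toy model of this hypothesis (not GGG's actual architecture; the per-item cost and interval are invented numbers), a fixed-timer checkpoint whose duration scales with state size reproduces exactly the reported pattern: constant interval, amplitude proportional to inventory size.

CHECKPOINT_EVERY_S = 40.0     # matches the observed 30-50 s cadence
SERIALIZE_MS_PER_ITEM = 2.0   # hypothetical serialization cost per item

def stall_ms(item_count: int) -> float:
    # Blocking time of the periodic checkpoint grows with state size.
    return item_count * SERIALIZE_MS_PER_ITEM

for items in (600, 60):       # full inventory vs. drastically reduced
    print(f"{items} items -> ~{stall_ms(items):.0f} ms stall "
          f"every {CHECKPOINT_EVERY_S:.0f} s")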
@Hurkenschuß#3111
You seem like the kind of person the GGG roster is missing. Your research into this is bordering on science; great job. I was wondering: could having all my old PoE1 characters from previous leagues (the maximum default number of character slots), with a lot of random stuff still in their inventories, be what is causing the severe lag in PoE1 when trying to ctrl-click items from the inventory to the stash? Could the hundreds of remove-only stash tabs also contribute to the lag problems? While I can easily clean up the character inventories and test this tomorrow, cleaning up the old stash tabs would take a full workday minimum.
Déjà vu of D4: if you see another player, their whole stash inventory gets loaded :D
Kidding aside, if it is indeed stash-tab related, they'll fix it soon... or do they want people to buy fewer stash tabs for a smoother experience?
IGN: “_THE__ROCK_”: level 92 melee ranger >>>> Tougher than any Marauder, no silly walk or spinal injury and has pants <<<< Other currently retired toons level 83-85 fitting the description above…
Last edited by Conan_xxx#1131 on Mar 3, 2026, 1:49:59 PM
" I mean, I'm sure you're every bit as capable of talking it over with ChatGPT for 5 minutes as anyone else is. Indeed - based on what I've seen in this thread, you're probably significantly more capable... Most threads do not begin with a "title" in the body of the post - much less one written by software that doesn't understand its human operator doesn't actually plan on providing a WinMTR log with the post; ![]() People have a tremendous habit of sorely understimating just how much information a qualified technician will extract from diagnostic data. Like the entire point of a WinMTR log is that it shows the route, but... the OP has just provided some loose anecdotes about what the log apparently says instead. For example: quite often packet loss beginning at one hop is actually caused by the hop immediately prior to it. But this LLM-guessing-at-the-problem reading of a WinMTR makes it impossible to know the previous IP address, much less which company it has been assigned to or its geographic location. In other cases, packet loss will be caused by a peering dispute between two specific companies. But once again... there's absolutely no way for GGG or anyone else to potentially investigate that, because ChatGPT has never sat a CCNA exam & the OP hasn't provided a single iota of data. Instead they've wanted people to trust The combination of at least a half-dozen very clear indications an LLM has contributed very heavily to the writing of the post and the lack of even a single log file or other data means the entire thread is a waste of time - there is a reason GGG has ignored it thus far. I mean - what is literally anyone supposed to do with this nonsense; " I am a Network Engineer, and... and I don't know what this means. It's just words and arrows. Traffic goes to "backend" ?? And... then... "origin" ?? Oh okay. That clarifies everything! Having spent thousands of euros on Path of Exile over the years, I will not be further supporting it financially until and unless GGG resumes offering Technical Support.