
Looking for a way to avoid instances with stuck events



tl;dr: the single most important way to avoid the issue is to be patient. Every event will, eventually, do your bidding.

There are, fortunately, a number of techniques to reduce the amount of patience we need to have.

 

****

* After a patch, all instances are reset. You can see a player-made prediction of when the next patch will arrive [here](https://www.thatshaman.com/tools/countdown/) (Tuesday as the patch day is confirmed by ANet).

* Certain major events cause instances to roll over, so sometimes just waiting a day is enough. Typically, this applies to any zone with a world boss or a map-wide meta. Most such maps can be seen on the [wiki's event timer](https://wiki.guildwars2.com/wiki/Event_timers).

* Certain currently-popular events will do the same, e.g. any map with a four-event daily or something needed for a new achievement or collection. This month, that includes the Orrian maps (for the service medal). Again, simply waiting a day is usually enough.

* Many maps are populated enough to spawn a second copy at reset. That newer copy sometimes empties out and closes quickly, so within about three hours of reset it may be gone.

* Guesting to a world labeled as full, or joining a party with someone from such a world and then entering the map, can get you into a different instance, **if** there are multiples. (In the interests of brevity, I'll skip the reason that works.)

* During the weekend, some maps are heavily populated due to guild missions. Those are a little less predictable, so just check on Saturday or Sunday during peak hours for your region.

* And if none of that works, it is possible to force a new map (a rough sketch of the idea follows this list). You need at least 50, possibly 100, people to help. Get everyone _except you_ into one squad (or two squads, if there are more than 50). Everyone moves to the same zone when prompted. Once everyone confirms they are in, you try to enter while representing a different guild from the majority. At some point in the process, the map will "soft cap" and you'll be dropped into a new instance. (Obviously impractical for the vast majority of people.)
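
To make that last trick a bit more concrete, here is a toy sketch in Python. It is **not** the actual megaserver logic (which isn't public); the names `SOFT_CAP`, `MapInstance`, and `ZoneRouter` are invented, and the only assumption it models is that copies of a zone fill up to a soft population cap before a fresh copy is opened.

```python
# Toy model of "soft cap" map selection -- NOT the actual megaserver code.
# Assumption: the server fills existing copies of a zone until each one
# passes a soft population cap, then opens a new copy for further joiners.

SOFT_CAP = 100  # hypothetical per-copy threshold


class MapInstance:
    def __init__(self, instance_id):
        self.instance_id = instance_id
        self.players = set()


class ZoneRouter:
    def __init__(self):
        self.instances = []
        self._next_id = 1

    def join(self, player_name):
        # Prefer the fullest copy that is still under the soft cap, so
        # existing copies fill up before a new one is opened.
        open_copies = [m for m in self.instances if len(m.players) < SOFT_CAP]
        if open_copies:
            target = max(open_copies, key=lambda m: len(m.players))
        else:
            # Every copy is soft-capped (e.g. the helper squad just piled
            # in), so a brand-new instance is created for the next joiner.
            target = MapInstance(self._next_id)
            self._next_id += 1
            self.instances.append(target)
        target.players.add(player_name)
        return target.instance_id


router = ZoneRouter()
for i in range(SOFT_CAP):              # the helper squad fills copy 1 ...
    router.join(f"helper_{i}")
print(router.join("you"))              # ... and "you" are routed to copy 2
```

The real system also weighs things like party, guild, and home world when picking a copy, which is why the guesting/party trick earlier in the list can land you in a different instance even without a full squad of helpers.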

 

****

There are far fewer stuck events or stale instances than is commonly believed. That's because the game has a variety of incentives for people to overpopulate certain zones at regular intervals, which usually causes new instances to be created. It's not a perfect system by any means, so some maps take a while to refresh.

 

Generally, there aren't that many events that actually stall for very long. All too often, people don't realize that there's a prerequisite event or that there's something that we can do as players to get things moving again.

 

However, it's always the case that if we "need" the event for some reason (collection, achievement, whatever), any delay is frustrating. And it's reasonable for us not to give a skritt as to why the event isn't available to us _right now_. Still, I find that it's sufficient to simply not be in a rush, and to be willing to come back another time or day.

 

I've done a bunch of older collections in the last few months (filling time during the content drought), including a number of events that ... other people were claiming were bugged/stalled, even as I was doing them.

 

 


Each event should have a process that senses when something is wrong and then restarts the event from the beginning. There should be no need to wait until daily reset or for the next patch. I don't know how the instance process works, but when there is a defective event, players entering the map should be directed to a working instance. Eventually, when the low-population process kicks everyone to a different instance, the faulty instance can be taken offline and restarted from scratch. Maybe this is already done, just not as fast as I would like.


> @"Carnius Magius.8091" said:

> Each event should have a process that senses when something is wrong and then restarts the event from the beginning. There should be no need to wait until daily reset or for the next patch. I don't know how the instance process works, but when there is a defective event, players entering the map should be directed to a working instance. Eventually, when the low-population process kicks everyone to a different instance, the faulty instance can be taken offline and restarted from scratch. Maybe this is already done, just not as fast as I would like.

 

That's all fine and dandy... except it would make things worse, given the way the older events are scripted. Core Tyria events run on a lot of assumptions, and the whole idea of resetting an event because of an unknown state is itself a huge assumption. The mob AI has to be instructed on multiple things at each step, so you'd essentially have to force the entire event script to run each step forward or backward on false triggers/states and figure out how to hide the results. If you don't, the event won't clean up properly.
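
To make the problem concrete, here is a minimal, purely illustrative sketch in Python of a generic chained event script (the names `EventStep` and `ChainedEvent` are invented; this is not ArenaNet's scripting system). The point it shows: a blind reset is only safe if every step that already ran also knows how to undo its own side effects.

```python
# Illustrative only -- a generic chained event script, not GW2's engine.
# Each step spawns things and flips flags; a blind reset only works if
# every step also knows how to undo its own side effects.

class EventStep:
    def __init__(self, name, run, cleanup=None):
        self.name = name
        self.run = run          # spawns mobs, flips map flags, etc.
        self.cleanup = cleanup  # undoes those side effects (often missing)


class ChainedEvent:
    def __init__(self, steps):
        self.steps = steps
        self.current = 0

    def advance(self):
        self.steps[self.current].run()
        self.current += 1

    def hard_reset(self):
        # To reset safely we have to walk backwards and undo every step
        # already executed. If any step lacks a cleanup, its mobs,
        # markers, or flags are simply left behind on the map.
        for step in reversed(self.steps[:self.current]):
            if step.cleanup is None:
                raise RuntimeError(
                    f"step '{step.name}' has no cleanup -- "
                    "a reset would leave orphaned state behind")
            step.cleanup()
        self.current = 0
```

The claim above amounts to saying the old core scripts were written as if `cleanup` never needed to exist, so a generic "reset from unknown state" has nothing safe to call.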

 

From HoT onward, state tracking is built a lot better, and most wave-spawn processes were made more robust by pre-spawning the waves, giving the server enough time to make sure their state is fully set up before they become visible. I'm also pretty sure the ones linked to triggers have heartbeats now, so if the AI gets stuck or spawns below the map, the script has a chance to detect it and move forward.
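
For what it's worth, a heartbeat check of that kind usually boils down to something like the following Python sketch. The names (`WaveHeartbeat`, `tick`, `HEARTBEAT_TIMEOUT`) and the 120-second window are invented; I have no insight into the real implementation.

```python
import time

# Hypothetical heartbeat check for a single spawn wave; names invented.
# The wave's AI is expected to call report_progress() whenever it does
# something meaningful (moves, takes damage, reaches a waypoint, ...).

HEARTBEAT_TIMEOUT = 120.0  # seconds without progress before intervening


class WaveHeartbeat:
    def __init__(self):
        self.last_progress = time.monotonic()

    def report_progress(self):
        self.last_progress = time.monotonic()

    def is_stalled(self):
        return time.monotonic() - self.last_progress > HEARTBEAT_TIMEOUT


def tick(heartbeat, respawn_wave, advance_phase, can_respawn=True):
    # Called periodically by the event script. If the wave has made no
    # progress for too long (stuck AI, spawned under the map, ...),
    # either respawn it in a valid spot or skip ahead to the next phase.
    if heartbeat.is_stalled():
        if can_respawn:
            respawn_wave()
        else:
            advance_phase()
        heartbeat.report_progress()  # restart the clock after intervening
```

The event script only needs to call something like `tick()` on a timer; that small amount of bookkeeping is roughly what "having a heartbeat" buys the newer events.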

 

To do this with Core Tyria, they'd have to go back and rebuild all the scripts (possibly from the ground up), and they've avoided that like the plague, since scripts from that era are insanely fickle. (Historically, the only things they've done to those event scripts are inserting pointers, adding phases, or modifying the mobs... but they can't change anything within the event phases themselves.) From a project-management standpoint, you'd have to assume it means recreating them from scratch, and then hope it's either not as big a hassle as you thought or that some of the scripts are salvageable. But if they assume the scripts are salvageable and it turns out they're not, then the entire schedule is blown.

 

 

The big advantage of a map shutdown, or a new spawn, is that it wipes the game state wholesale. That's part of the reason WoW servers are restarted every week during maintenance: it flushes out all the state errors, memory artifacts, and other stuff that slips past garbage collection before they result in an uncontrolled crash. MMOs have this major problem where the demand for robust garbage collection and state cleanup rivals that of software-driven safety systems, but they can't afford to develop it as such due to the insanely high cost (manpower, design, money, time). I'm sure everyone would love it if every MMO had 100% uptime and could patch itself mid-runtime without stopping anything, and there are entire programming languages dedicated to being able to do this. But nobody would be willing to pay ten times the cost for their game to have it; most people won't even pay more than $50 for a new game, given the influence of Steam sales.

 

 

Now let's just assume everything I've said so far doesn't matter: how would you design a watchdog process to monitor for state problems? What rules would you use, and how would you check for them? How do you make it efficient enough that you're not just writing a bespoke watchdog script tailored to each specific event phase? And what are the risks of the watchdogs breaking when something on the server backend changes?


> @"starlinvf.1358" said:

> > @"Carnius Magius.8091" said:

> > Each event should have a process that senses when something is wrong and then restart the event from the beginning. There should be no need to to have to wait until daily reset or wait for the next patch update. I don't know how the instance process works but when there is a defective event, players entering the map should be directed to a working instance. Eventually, when the low population process kicks everyone to a different instance, the faulty instance can be taken offline and restarted from scratch. Maybe this is already done, just not as fast as I would like.

>

> Thats all find and dandy..... except it would make things worse with the way the older events are scripted. Core Tyria events run on a lot of assumptions, and the whole idea of resetting an event itself because of an unknown state is a huge assumption. Mob AI has to be instructed on multiple things at each step, and you'd essentially have to force the entire event script to run each step forward or backward on false triggers/states and figure out how to hide the results. If they don't, the event won't clean up properly.

>

> From HOT onward, state tracking is built a lot better, and most wave spawn processes made more robust, by pre-spawning them so the server had enough time to ensure states are fully set up before they were visible. I'm also pretty sure the ones linked to triggers have heart beats now, so if the AI gets stuck or spawns below the map, it has a chance to detect it and move forward.

>

> To do this with Core Tyria, they'd have to go back and rebuild all the scripts (possibly from the ground up); And they've avoiding that like the plague since scripts from that era are insanely fickle. (Historically, the only thing they've done to event scripts is insert pointers, add phases or modify the mobs.... but they can't change anything with the event phase themselves) From a Project management stand point, you'd have to make the assumption that it would involve recreating them from scratch, and that hope its either not as big a hassle as you thought, or that some of the script is salvageable. But if they assume its salvageable, and it turns out its not, then the entire schedule for it is blown.

>

>

> The big advantage of a map shutdown, or new spawn, is that it wipes the game states whole sale. Thats part of the reason WoW servers are restarted every week during maintenance..... its to flush out all the state errors, memory artifacts, and other stuff that slips past garbage collection before they result in an uncontrolled crash. MMOs have this major problem where the demand for robustness of garbage collection and state clean up is competitive with software driven safety systems..... but can't afford to develop it as such, due to the insanely high cost (man power, design, $$, time). I'm sure everyone would love if every MMO had 100% uptime, and could patch itself in the middle of run time without stopping anything..... and there are entire programming languages dedicated to being able to do this. But no one would be willing to pay 10 times the cost for their game to have this..... most people won't even pay more then $50 for a new game given the influence of Steam sales.

>

>

> Now let's just assume everything I've said so far doesn't matter: how would you design a watchdog process to monitor for state problems? What rules would you use, and how would you check for them? How do you make it efficient enough that you're not just writing a bespoke watchdog script tailored to each specific event phase? And what are the risks of the watchdogs breaking when something on the server backend changes?

 

A simple timeout would work in most cases. If the event hasn't progressed or completed in 30 minutes (or an hour, or four hours, whatever), then reset it.

 

Assuming they built in a way to reset or abort an event.
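
Sketched out in Python, that timeout rule is only about this much logic. It's purely illustrative: `reset_event()` and `note_progress()` are invented names, and the whole thing stands or falls on the assumed reset/abort hook actually existing.

```python
import time

# Purely illustrative; reset_event() stands in for whatever abort/reset
# hook would have to exist for this to work at all.

STALL_TIMEOUT = 30 * 60   # 30 minutes, per the suggestion above

# Last time each active event made visible progress (advanced a step,
# updated its progress bar, spawned the next wave, ...).
last_progress = {}        # event_id -> timestamp


def note_progress(event_id):
    last_progress[event_id] = time.monotonic()


def sweep(reset_event):
    # Run periodically per map instance: any event that has neither
    # progressed nor completed within the timeout gets reset.
    now = time.monotonic()
    for event_id, stamp in list(last_progress.items()):
        if now - stamp > STALL_TIMEOUT:
            reset_event(event_id)
            last_progress[event_id] = now
```

Even in this toy form, everything hinges on that reset hook cleaning up properly, which loops straight back to the scripting problems described in the post above.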

