All Activity

  1. Past hour
  2. my new build

    nice. maybe kati and i can submit a few
  3. Yesterday
  4. my new build

    yeah ive been collecting some sounds and getting ready for another update in the near future!
  5. my new build

    maybe an update to the sound pack? add a few more while we're at it
  6. my new build

    we may need a sound file for that
  7. Collision map?

    what OS you on?
  8. my new build

    you have to hear her say it, it means "little" but sounds like "leeedoe"
  9. my new build

    he's a beach? hmm
  10. Collision map?

    Best practice is not to subscribe to maps in the workshop. If a workshop map gets updated but the server is running an older version, then you will get collisions.
  11. Collision map?

    Sometimes I can't load maps, not sure what the reason for this is. I've tried downloading certain things from the workshop as well. In some cases I've played the very maps that won't let me play any longer. What happens is it loads the server and then just kinda kicks me out before I spawn... how do I fix this?
  12. my new build

    he lido!
  13. Last week
  14. Newly added maps

    relax biker boy
  15. Newly added maps

    Oh sorry then. Well either way I don’t really think you’re an idiot, was just trying to be funny. Didn’t mean to trigger you. Thanks for the report anyway.
  16. Newly added maps

    i didnt call you an idiot, i said 'fucking guy said it was added'
  17. Newly added maps

    lol chill man, that was in reference to the post where I was bitching about forgetting about doing steam workshop updates on tuesday nights and you called me a fucking idiot. Don't take it personal, I didn't. All I'm saying is I tested that map this morning and it loaded fine. So it was probably a QL/steam crash as designed
  18. Newly added maps

    fuck this
  19. Newly added maps

    Wow that is some scary shit! ermap3 works fine. Its probably because you’re a fucking idiot
  20. Newly added maps

    tried to load ermap3 on chi no limit with a full server and it crashed.. kinda scared to try any others
  21. my new build

    nice build! same end result...losing in QL to old guys.
  22. Upcoming router maintenance on Sunday (should not cause connectivity loss)

    Aug 16 2019 01:02:08 AM PT. On the morning of Sunday, August 18, between midnight and 4am CDT, we have asked the facility to make physical changes to our two core routers in Chicago as part of our bandwidth upgrades there. These adjustments will require shifting traffic between the routers and may temporarily reduce performance for some clients, since the upstream provider mix will be reduced during the maintenance window. While there is always the slim possibility that a mistake will be made and a cable bumped out of place, we have engineered the maintenance steps in such a way that we expect connectivity to be maintained at all times.
  23. my new build

    nice man!
  24. my new build

    Hell yes! Might run ql lol!
  25. my new build

  26. my new build

    case: https://www.amazon.com/dp/B07BCGCPFH
    memory: https://www.amazon.com/dp/B07KQP3XQB
    mouse: https://www.amazon.com/gp/product/B07NSSPV9S
    mouse pad: https://www.amazon.com/gp/product/B071WZ56G9
    keyboard: https://www.amazon.com/gp/product/B06XMRQ68B
    gpu: https://www.amazon.com/gp/product/B07V265ZBH
    psu: https://www.amazon.com/gp/product/B01LYGFRL6
    ssd: https://www.amazon.com/dp/B07GCL6BR4
    cpu: https://www.amazon.com/gp/product/B07HHLX1R8
    cpu cooler: https://www.amazon.com/gp/product/B07H1VZ11F
    mobo: https://www.amazon.com/gp/product/B07HM753YS
    monitor: https://www.amazon.com/gp/product/B0733VW5QB
  27. Chicago....

    Facility power maintenance on 8/16 and 8/20 will cause downtime

    Aug 14 2019 12:01:10 AM PT. We have been notified by the facilities provider in Chicago (Equinix) that they will be replacing half of their Automatic Static Transfer Switches on Friday, August 16, between 10pm CDT and 6am the next day, and the rest of the transfer switches on Tuesday, August 20, between 10pm CDT and 6am the next day. They have told us that these replacements will take down specific power feeds for the entire maintenance window. This will have consequences for customers:

    - Since most of our network switches are single-fed, we will see a connectivity interruption for those switches that are connected to the impacted power feeds on each day. We have asked the facility to mitigate this by moving the switches to different power feeds after power has been lost, and then back again afterward; this means that nearly all customers in Chicago will see at least two connectivity blips on one of the nights of the maintenance. The connectivity interruptions could last as long as an hour, depending on how quickly the facility moves power cords in our cabinets; we will ask them to prewire the new cords to speed up this process. (Note that our core routers and core aggregation switch have redundant power and should not go offline.)

    - Most of our VDS, standalone game server, and standalone voice server machines are single-fed. This means that we will need to power these machines down entirely during the maintenance events. On each night, approximately 30 minutes prior to the start of the 10pm event, we will run a script that shuts down all VDS-hosting machines that our records indicate will lose power that night. Because our records may not be fully accurate, because the shutdown operations may not end up being clean for other reasons, and because the facility could make mistakes (such as starting the maintenance early or taking down the wrong circuit), we recommend that VDS customers back up important files before each maintenance window, and also limit heavy disk-writing operations around the start of each window (in-flight disk writes can cause file/disk corruption when power is abruptly lost).

    - Newer customer dedicated machines -- E3-1270v3 and better -- all have redundant power supplies that are plugged into two separate power feeds, so they shouldn't go offline. However, most of these machines are designed to throttle CPU performance if one of the two power feeds is lost, and these customers will see higher CPU usage until power is restored on each night.

    Having switches and servers lose power is a big deal, and having such a long (multi-hour) outage, starting near peak usage hours, makes this event an even bigger deal. The point of the facility having layers of UPS and generator equipment is to provide continuous power and avoid this type of outage. We are making sure to communicate how serious this is to the facility and to make sure that they take all possible steps to do the maintenance right and to help us keep downtime to a minimum. We will update this event as we have more information, such as if the start or duration of the maintenance changes.
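    The pre-maintenance shutdown pass described in the announcement (taking down each single-fed VDS host whose power feed is scheduled to go dark) could be sketched roughly like this. This is only a minimal illustration: the records format, feed names, and hostnames are hypothetical assumptions, since the announcement doesn't describe how the provider's script actually works.

    ```python
    # Hypothetical sketch of a pre-maintenance shutdown pass.
    # Assumption: records map each single-fed hostname to the power feed it
    # draws from; the real mechanism and data format are not described in
    # the announcement.

    def hosts_to_shut_down(records, impacted_feeds):
        """Return hostnames whose power feed is scheduled to lose power."""
        return sorted(h for h, feed in records.items() if feed in impacted_feeds)

    def main():
        # Example records (made-up hostnames and feed names).
        records = {
            "vds-chi-01": "feed-A",
            "vds-chi-02": "feed-B",
            "game-chi-07": "feed-A",
        }
        # Tonight's maintenance takes down feed-A only.
        for host in hosts_to_shut_down(records, {"feed-A"}):
            # In production this would issue a clean shutdown (e.g. over
            # SSH or IPMI); here we only print the plan.
            print(f"would shut down {host}")

    if __name__ == "__main__":
        main()
    ```

    Running the filter ahead of time, rather than cutting power blindly, is what makes the announcement's caveat matter: any host missing from (or wrong in) the records simply won't be shut down cleanly before its feed drops.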