- TCP probe support: You're now able to configure startup and liveness health probes using the TCP protocol.
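Conceptually, a TCP probe passes when a TCP connection to the container's port can be established; no HTTP request is involved. A minimal sketch of that check (function name and defaults are illustrative, not the platform's implementation):

```python
import socket

def tcp_probe(host: str, port: int, timeout_s: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout_s."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:
        # Connection refused, timed out, or unreachable: probe fails.
        return False
```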
- Previously, some nodes on the network were unable to run containers correctly due to a 'too many open files' error. This could cause workloads to crash on the node after long periods of uptime. This has been fixed.
- Previously, when specifying a startup command from the API with multiple arguments, some of the arguments were not being passed correctly. This has now been fixed.
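The key property is that an argument containing spaces must survive as a single argv entry rather than being re-split. Python's `shlex` illustrates the expected shell-style tokenization (the command shown is just an example):

```python
import shlex

# A quoted argument should remain one argv entry after tokenization.
cmd_string = 'sh -c "echo hello world"'
argv = shlex.split(cmd_string)
# argv is ['sh', '-c', 'echo hello world'] -- three entries, not five.
```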
- Startup and liveness probes were previously being incorrectly reported in the new workload errors UI. They are now correctly reported as StartupProbeFailure and LivenessProbeFailure.
- Streaming logs now work much more reliably.
- Previously, logs for containers that exited abruptly were not being appropriately captured and sent to streaming logs or external logging systems. This has been fixed, and the patch is being deployed throughout the network.
- If a startup probe is defined, the load balancer will not route traffic to a container until the startup probe passes. Previously, traffic could be routed before the startup probe succeeded, resulting in requests reaching containers that were not yet ready to serve them.
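The gating behavior above can be sketched as a poll loop that only admits a backend once a TCP connect succeeds. This is a simplified illustration, not the load balancer's actual code:

```python
import socket
import time

def wait_for_tcp(host: str, port: int, timeout_s: float = 30.0,
                 interval_s: float = 0.5) -> bool:
    """Poll until a TCP connect succeeds (startup probe passes) or time out.

    Returns True once the port accepts a connection; the caller would only
    then start routing traffic to the backend.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(interval_s)
    return False
```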
- We've updated the default Host value for external New Relic logging to correctly point to the most commonly-used New Relic host server.
- We have experienced brief outages of the load balancers that provide inbound networking. This resulted in some customers being unable to access running containers. We have put automated monitoring and remediation in place to allow the system to gracefully restore connections, and we are diagnosing the root cause.