EDB Postgres for Kubernetes 1.13.0 release notes

This release of EDB Postgres for Kubernetes includes the following:

| Type | Description |
| ---- | ----------- |
| Feature | Support for Snappy compression. Snappy is a fast compression option for backups that increases the speed of uploads to the object store at the cost of a lower compression ratio. See the first configuration sketch after the table. |
| Feature | Support for tagging files uploaded to the Barman object store, which also helps identify the files of backups after Cluster deletion. This feature requires Barman 2.18 in the operand image. |
| Feature | Extension of the status of a Cluster with `status.conditions`. The condition `ContinuousArchiving` indicates that the Cluster has started to archive WAL files. |
| Feature | Improved `status` command of the `cnp` plugin for `kubectl`, with additional information: a Cluster Summary section showing the status of the Cluster, and a Certificates Status section listing the certificates used in the Cluster along with the time left before they expire. |
| Feature | Support for the new `barman-cloud-check-wal-archive` command to detect a non-empty backup destination when creating a new cluster. |
| Feature | Support for using a Secret to add default monitoring queries through the `MONITORING_QUERIES_SECRET` configuration variable. See the second configuration sketch after the table. |
| Feature | Allow the user to restrict the container's permissions using AppArmor (on Kubernetes clusters deployed with AppArmor support). |
| Feature | Windows platform support for the `cnp` plugin for `kubectl`; the plugin is now available on Windows x86 and ARM. |
| Feature | Drop support for Kubernetes 1.18 and deprecated API versions. |
| Container images | PostgreSQL containers include Barman 2.18. |
| Security fix | Add a coherence check of the username field inside owner and superuser secrets; previously, a malicious user could have used the secrets to change the password of any PostgreSQL user. |
| Bug fix | Fix a memory leak in the code fetching status from Postgres pods. |
| Bug fix | Disable PostgreSQL self-restart after a crash. The instance controller handles the lifecycle of the PostgreSQL instance. |
| Bug fix | Prevent modification of the `spec.postgresUID` and `spec.postgresGID` fields in the validation webhook. Changing these fields after Cluster creation makes PostgreSQL unable to start. |
| Bug fix | Reduce the log verbosity of the backup and WAL archiving handling code. |
| Bug fix | Correct a bug resulting in a Cluster being marked as Healthy when not yet initialized. |
| Bug fix | Allow standby servers in clusters with a very high WAL production rate to switch to streaming once they are aligned. |
| Bug fix | Fix a race condition during the startup of a PostgreSQL pod that could seldom lead to a crash. |
| Bug fix | Fix a race condition that could lead to a failure initializing the first PVC in a Cluster. |
| Bug fix | Remove an extra restart of a just-demoted primary pod before it joins the Cluster as a replica. |
| Bug fix | Correctly handle replication-sensitive PostgreSQL configuration parameters when recovering from a backup. |
| Bug fix | Fix missing validation of PostgreSQL configurations during Cluster creation. |
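
Both Snappy compression and file tagging are configured under the Cluster's `backup.barmanObjectStore` section. The following is a minimal, illustrative sketch: the destination path, the `aws-creds` credentials Secret, and the tag values are placeholders, and the example assumes an S3-compatible object store.

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  backup:
    barmanObjectStore:
      destinationPath: "s3://my-backup-bucket/path"  # placeholder bucket and path
      s3Credentials:
        accessKeyId:
          name: aws-creds          # placeholder Secret with object store credentials
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: aws-creds
          key: ACCESS_SECRET_KEY
      data:
        compression: snappy        # Snappy compression for base backup data files
      wal:
        compression: snappy        # Snappy compression for archived WAL files
      tags:                        # tags applied to files uploaded to the object store
        environment: staging
      historyTags:                 # tags applied to history files
        environment: staging
```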
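The `MONITORING_QUERIES_SECRET` variable is part of the operator configuration rather than the Cluster specification. A minimal sketch follows, assuming the operator reads its configuration from a ConfigMap in its installation namespace; the ConfigMap name, namespace, and the referenced Secret name below are assumptions that depend on how the operator was installed.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgresql-operator-controller-manager-config  # assumed operator config name; may differ
  namespace: postgresql-operator-system                # assumed installation namespace; may differ
data:
  # Name of a Secret, in the operator namespace, holding the default monitoring queries
  MONITORING_QUERIES_SECRET: default-monitoring-queries  # placeholder Secret name
```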