HQ Changelog
HQ configuration file TOML -> YAML
Previously, HQ used the TOML file format for its configuration file. To reduce the differences in file formats between the Agent and HQ as much as possible, the HQ configuration file has been migrated to YAML; this is the first step in that direction.
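For illustration only, here is a minimal sketch of how a setting moves from the old TOML form to the new YAML form, using the logger settings that also appear in the values.yaml below; the exact keys in your config.yaml may differ:

# Previously (config.toml):
#   [logger]
#   mode = "text"
#   level = "debug"
# Now (config.yaml):
logger:
  mode: "text"
  level: "debug"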
Postgres changes
The Postgres connection URI is no longer built inside config.yaml; it is now assembled at runtime by the backend.
The parameter group has changed from postgres to storage.postgres.*.
In the previous version, the schema was defined as part of extraParamSpecs. The schema is now defined as a separate property, storage.postgres.database.schema.
The property extraParamSpecs has been renamed to params (see the sketch below).
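A minimal sketch of the new storage.postgres.* layout, assembled from the properties described above and the values.yaml comparison below; the host, credentials and params value are illustrative:

storage:
  postgres:
    host: postgres-postgresql.postgres.svc.cluster.local
    port: 5432
    username: lenses
    database: hq
    # The schema is no longer part of extraParamSpecs; it is now the dedicated
    # property storage.postgres.database.schema.
    # extraParamSpecs has been renamed to params, e.g.:
    params: "?sslmode=disable"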
Parameter group api changes
The parameter group api has been renamed to http, and the authentication-related parameters are no longer part of it; they have been moved to the new auth group described below.
New parameter auth
The new auth property is derived from the former api property (now http).
The parameters moved from api (now http) to auth include administrators and the saml configuration, as shown in the sketch below and in the values.yaml comparison.
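Roughly, the split looks like the following sketch, pieced together from the values.yaml comparison below (origins, addresses and URLs are illustrative; note that baseUrl/entityId are now spelled baseURL/entityID):

# Before: everything lived under api
# api:
#   address: ":8080"
#   accessControlAllowOrigin: '["http://localhost:8080"]'
#   administrators: '["admin@example.com"]'
#   saml: ...
# Now: transport settings stay under http, authentication moves to auth
http:
  address: ":8080"
  accessControlAllowOrigin:
    - https://hq.example.com
auth:
  administrators:
    - admin@example.com
  saml:
    baseURL: "https://hq.example.com"
    entityID: "https://hq.example.com"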
Aurora support
HQ has been tested against Amazon Aurora (PostgreSQL) and is compatible with it.
Pod restart on configmap checksum change
If the ConfigMap changes, running helm upgrade now automatically restarts the HQ pod as well, so no manual intervention is needed.
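This behaviour is typically implemented with a checksum annotation on the pod template; the following is a minimal sketch of the common Helm pattern, not necessarily the exact template used by the chart:

# deployment.yaml (Helm template excerpt)
spec:
  template:
    metadata:
      annotations:
        # Re-computed on every render; a changed ConfigMap changes the hash,
        # which changes the pod template and triggers a rolling restart.
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}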
TL;DR: Changes in values.yaml
Now (values.yaml):
lensesHq:
auth:
administrators:
- admin@example.com
- admin2@example.com
saml:
baseURL: "https://"
entityID: "https://"
# -- Example: <?xml version="1.0" ... (big blob of xml) </md:EntityDescriptor>
metadata:
referenceFromSecret: false
secretName: hq-tls-mock-saml-metadata
secretKeyName: metadata.xml
stringData: |
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" entityID="https://saml.example.com/entityid" validUntil="2034-09-23T08:10:34.764Z">
<md:IDPSSODescriptor WantAuthnRequestsSigned="true" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
<md:KeyDescriptor use="signing">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>MIIC4jCCAcoCCQC33wnybT5QZDANBgkqhkiG9w0BAQsFADAyMQswCQYDVQQGEwJV
...
m0eo2USlSRTVl7QHRTuiuSThHpLKQQ==</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</md:KeyDescriptor>
<md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress</md:NameIDFormat>
<md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://mocksaml.com/api/saml/sso"/>
<md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://mocksaml.com/api/saml/sso"/>
</md:IDPSSODescriptor>
</md:EntityDescriptor>
userCreationMode: "sso"
usersGroupMembershipManagementMode: "manual"
tls:
enabled: false
cert:
referenceFromSecret: false
stringData: |
-----BEGIN CERTIFICATE-----
MIIECDCCAvCgAwIBAgIRAIiph6RQMbGCABOaJO94PbowDQYJKoZIhvcNAQELBQAw
...
-----END CERTIFICATE-----
privateKey:
secret:
name: hq-agent-test-authority
key: hq-tls-test.key.pem
agents:
address: ":10000"
tls:
enabled: false
verboseLogs: false
cert:
referenceFromSecret: false
secretName: hq-agent-test-authority
secretKeyName: hq-tls-test.crt.pem
stringData: |
-----BEGIN CERTIFICATE-----
MIIECDCCAvCgAwIBAgIRAIiph6RQMbGCABOaJO94PbowDQYJKoZIhvcNAQELBQAw
gYUxFTATBgNVBAYTDEFua2gtTW9ycG9yazEaMBgGA1UEChMRVW5zZWVuIFVuaXZl
.....
-----END CERTIFICATE-----
privateKey:
secret:
name: hq-agent-test-authority
key: hq-tls-test.key.pem
http:
address: ":8080"
accessControlAllowOrigin:
- https://
- http://
storage:
postgres:
#host: postgres-postgresql.postgres.svc.cluster.local
host:
port: 5432
#username: lenses
username: postgres
database: hq
passwordSecret:
type: "externalSecret"
name: postgres-aurora
key: password
externalSecret:
secretStoreRef:
clusterSecretStore:
name: enjoy3-secrets
logger:
mode: "text"
level: "debug"
# Additional env variables appended to deployment
# Follows the format of [EnvVar spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#envvar-v1-core)
additionalEnv:
# - name: FOO
# value: bar
# Disable livenessProbe, used while debugging
livenessProbe:
enabled: false
# Pause execution of Lenses start up script to allow the user to login into the container and
# check the running environment, used while debugging
pauseExec:
enabled: false
monitoring:
port: 9090
Before (values.yaml):

lensesHq:
agents:
address: ":10000"
tls:
enabled: false
api:
address: ":8080"
accessControlAllowOrigin: '["http://localhost:8080"]'
administrators: '["admin@example.com"]'
saml:
baseUrl: "http://localhost:8080"
entityId: "http://localhost:8080"
# -- Example: <?xml version="1.0" ... (big blob of xml) </md:EntityDescriptor>
metadata:
referenceFromSecret: false
stringData: |
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" entityID="https://saml.example.com/entityid" validUntil="2034-09-23T08:10:34.764Z">
<md:IDPSSODescriptor WantAuthnRequestsSigned="true" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
<md:KeyDescriptor use="signing">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>MIIC4jCCAcoCCQC33wnybT5QZDANBgkqhkiG9w0BAQsFADAyMQswCQYDVQQGEwJV
SzEPMA0GA1UECgwGQm94eUhRMRIwEAYDVQQDDAlNb2NrIFNBTUwwIBcNMjIwMjI4
MjE0NjM4WhgPMzAyMTA3MDEyMTQ2MzhaMDIxCzAJBgNVBAYTAlVLMQ8wDQYDVQQK
DAZCb3h5SFExEjAQBgNVBAMMCU1vY2sgU0FNTDCCASIwDQYJKoZIhvcNAQEBBQAD
ggEPADCCAQoCggEBALGfYettMsct1T6tVUwTudNJH5Pnb9GGnkXi9Zw/e6x45DD0
RuRONbFlJ2T4RjAE/uG+AjXxXQ8o2SZfb9+GgmCHuTJFNgHoZ1nFVXCmb/Hg8Hpd
4vOAGXndixaReOiq3EH5XvpMjMkJ3+8+9VYMzMZOjkgQtAqO36eAFFfNKX7dTj3V
pwLkvz6/KFCq8OAwY+AUi4eZm5J57D31GzjHwfjH9WTeX0MyndmnNB1qV75qQR3b
2/W5sGHRv+9AarggJkF+ptUkXoLtVA51wcfYm6hILptpde5FQC8RWY1YrswBWAEZ
NfyrR4JeSweElNHg4NVOs4TwGjOPwWGqzTfgTlECAwEAATANBgkqhkiG9w0BAQsF
AAOCAQEAAYRlYflSXAWoZpFfwNiCQVE5d9zZ0DPzNdWhAybXcTyMf0z5mDf6FWBW
5Gyoi9u3EMEDnzLcJNkwJAAc39Apa4I2/tml+Jy29dk8bTyX6m93ngmCgdLh5Za4
khuU3AM3L63g7VexCuO7kwkjh/+LqdcIXsVGO6XDfu2QOs1Xpe9zIzLpwm/RNYeX
UjbSj5ce/jekpAw7qyVVL4xOyh8AtUW1ek3wIw1MJvEgEPt0d16oshWJpoS1OT8L
r/22SvYEo3EmSGdTVGgk3x3s+A0qWAqTcyjr7Q4s/GKYRFfomGwz0TZ4Iw1ZN99M
m0eo2USlSRTVl7QHRTuiuSThHpLKQQ==</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</md:KeyDescriptor>
<md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress</md:NameIDFormat>
<md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://mocksaml.com/api/saml/sso"/>
<md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://mocksaml.com/api/saml/sso"/>
</md:IDPSSODescriptor>
</md:EntityDescriptor>
userCreationMode: "sso"
usersGroupMembershipManagementMode: "sso"
tls:
enabled: false
# Find more details in https://docs.lenses.io/current/installation/kubernetes/helm/#helm-storage
## Postgres template example: "postgres://[username]:[pwd]@[host]:[port]/[database]?sslmode=require"
postgres:
host: postgres-postgresql.postgres.svc.cluster.local
port: 5432
username: lenses
database: hq
#extraParamSpecs: "?sslmode=disable"
passwordSecret:
type: "externalSecret"
name: postgres
key: password
externalSecret:
secretStoreRef:
clusterSecretStore:
name: [CLUSTER_SECRET_STORE_NAME]
logger:
mode: "text"
level: "debug"
# Additional env variables appended to deployment
# Follows the format of [EnvVar spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#envvar-v1-core)
additionalEnv:
# - name: FOO
# value: bar
# Disable livenessProbe, used while debugging
livenessProbe:
enabled: true
monitoring:
port: 9090
Agent Changelog
Agent Key reference change
The environment variable previously known as LENSES_HQ_AGENT_KEY, which is referenced in provisioning.yaml and stores the agentKey value, has been renamed to LENSESHQ_AGENT_KEY.
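For example, if provisioning.yaml references the agent key through an environment-variable placeholder, only the variable name changes; the surrounding keys in this sketch are illustrative:

lensesHq:
  - name: lenses-hq
    version: 1
    configuration:
      agentKey:
        # Before: value: ${LENSES_HQ_AGENT_KEY}
        value: ${LENSESHQ_AGENT_KEY}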
Known issues
Since the newest versions of Lenses HQ and the Agent introduce breaking changes, the following issues can occur.
Database migration
When running helm upgrade, HQ can fail with the following error log:
Fatal: Init db error" err="migrate database: database has been migrated beyond what we understand: the current is 17; the target is 2" version=v6.0.0-alpha.14
To fix it, run the following command on the Postgres database:
DELETE FROM goose_db_version WHERE id > 2;
If the SQL command cannot be run, the database has to be cleared, as if starting from scratch.