GlobalConf is empty on first install #2536
Hi @cruznick! The logs look normal for a new Security Server deployment. The errors about the missing instance identifier and anchor file are expected, since the Security Server hasn't been initialised yet. However, since you're using volumes, the configuration data should be persisted and shouldn't be lost when recreating the containers. Did you keep the previously created volumes when recreating the stack? The following configuration directories should be persisted to volumes:

- `/etc/xroad`
- `/var/lib/xroad`
Also, do you use a local or remote database? If you use a local database, the database's data directory should be persisted as well.
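For reference, a minimal Pulumi sketch of claims for those two directories could look like the one below; the resource names and the EFS-backed StorageClass `efs-sc` are illustrative assumptions, not taken from this thread.

```typescript
import * as k8s from '@pulumi/kubernetes';

// Hypothetical helper: one claim per X-Road directory that must survive
// container recreation. 'efs-sc' is an assumed EFS-backed StorageClass.
const mkPvc = (name: string) =>
  new k8s.core.v1.PersistentVolumeClaim(name, {
    metadata: { name },
    spec: {
      accessModes: ['ReadWriteMany'], // EFS volumes support RWX
      storageClassName: 'efs-sc',
      resources: { requests: { storage: '5Gi' } },
    },
  });

export const pvcEtc = mkPvc('xroad-etc'); // mount at /etc/xroad
export const pvcLib = mkPvc('xroad-lib'); // mount at /var/lib/xroad
```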
Hi, both are persisted as volumes on an EFS, and the db is an RDS Aurora PostgreSQL 16 whose config is loaded as env vars. The problem is that I'm not able to see the admin UI to initialize the setup so the Gov instance can see my server.
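For context, here is a hedged sketch of passing the external-database settings as env vars; the `XROAD_DB_*` names are environment variables documented for the sidecar image, while the Secret name, endpoint, and password below are placeholders.

```typescript
import * as k8s from '@pulumi/kubernetes';

// Placeholder Secret carrying the RDS Aurora connection settings; it can be
// wired into the container via envFrom.secretRef, as in the deployment below.
export const dbSecret = new k8s.core.v1.Secret('xroad-db', {
  metadata: { name: 'xroad-db' },
  stringData: {
    XROAD_DB_HOST: 'aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com',
    XROAD_DB_PORT: '5432',
    XROAD_DB_PWD: 'change-me',
  },
});
```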
Here is the code snippet that binds the volumes:

```typescript
import * as k8s from '@pulumi/kubernetes';
import * as configMap from './configmap';
import * as secret from './secret';
import { productName, kubernetesStack, config } from '../base';
import { serviceAccount } from './serviceAccount';
import { pvcLib, pvcEtc } from './persistentVolumeClaim';
new k8s.apps.v1.Deployment(
productName,
{
metadata: {
name: productName,
namespace: productName,
labels: {
app: productName,
type: `app`,
},
},
spec: {
replicas: 1,
selector: {
matchLabels: {
app: productName,
type: `app`,
},
},
template: {
metadata: {
labels: {
app: productName,
type: `app`,
},
},
spec: {
serviceAccountName: serviceAccount.metadata.name,
securityContext: {
fsGroup: 1000,
},
volumes: [
{
name: 'xroad-persistent-storage',
persistentVolumeClaim: {
claimName: pvcLib.metadata.name,
},
},
{
name: 'xroad-etc-storage',
persistentVolumeClaim: {
claimName: pvcEtc.metadata.name,
},
},
{
name: 'xroad-config',
configMap: {
name: configMap.gcbaEnv.metadata.name,
},
},
// {
// name: 'xroad-properties',
// configMap: {
// name: configMap.xroadConfig.metadata.name,
// },
// },
// {
// name: 'xroad-db-properties',
// configMap: {
// name: configMap.xroadDbProperties.metadata.name,
// },
// },
{
name: 'xroad-secrets',
secret: {
secretName: secret.gcbaSecret.metadata.name,
},
},
{
name: 'ssh-key-secret',
secret: {
secretName: secret.sshKeySecret.metadata.name,
items: [
{
key: 'SSH_PUBLIC_KEY',
path: 'id_rsa.pub',
mode: 0o644, // Ensure read permissions
},
],
},
},
],
containers: [
{
name: 'app',
imagePullPolicy: 'Always',
image: `niis/xroad-security-server-sidecar:${config.require('imageTag')}`,
volumeMounts: [
{
name: 'xroad-persistent-storage',
mountPath: '/var/lib/xroad',
},
{
name: 'xroad-config',
mountPath: '/etc/xroad/config',
},
{
name: 'xroad-secrets',
mountPath: '/etc/xroad/secrets',
},
{
name: 'ssh-key-secret',
mountPath: '/etc/.ssh/',
},
],
envFrom: [
{ configMapRef: { name: configMap.gcbaEnv.metadata.name } },
{ secretRef: { name: secret.gcbaSecret.metadata.name } },
],
startupProbe: {
httpGet: {
path: '/',
port: 8080,
},
periodSeconds: 10,
failureThreshold: 60,
initialDelaySeconds: 20,
},
livenessProbe: {
httpGet: {
path: '/',
port: 8080,
},
periodSeconds: 10,
successThreshold: 1,
failureThreshold: 5,
},
readinessProbe: {
httpGet: {
path: '/',
port: 8080,
},
periodSeconds: 10,
timeoutSeconds: 6,
failureThreshold: 1,
},
ports: [
{ containerPort: 8443 },
{ containerPort: 4000 },
{ containerPort: 5588 },
{ containerPort: 22 },
],
securityContext: {
runAsNonRoot: false,
runAsUser: 0,
privileged: true,
capabilities: {
drop: ['ALL'],
add: ['NET_BIND_SERVICE', 'SYS_ADMIN'],
},
},
resources: {
requests: {
cpu: '2', // 2 CPU cores
memory: '3Gi', // 3 GiB of memory
},
limits: {
cpu: '2', // 2 CPU cores
memory: '3Gi', // 3 GiB of memory
},
},
},
],
},
},
},
},
{
provider: kubernetesStack.provider,
},
);
```

The commented-out part is from when I was attempting to write the properties files directly. Right now I'm trying to bind all the ports directly, but the issue is mostly the same.

Service:

```typescript
import { productName, kubernetesStack } from '../base';
import * as k8s from '@pulumi/kubernetes';
export const service = new k8s.core.v1.Service(
productName,
{
metadata: {
name: productName,
namespace: productName,
},
spec: {
selector: {
app: productName,
},
ports: [
{
protocol: 'TCP',
port: 8443,
targetPort: 8443,
name: 'consumer-info', // Main service
},
{
protocol: 'TCP',
port: 4000,
targetPort: 4000,
name: 'admin', // Admin interface
},
{
protocol: 'TCP',
port: 5588,
targetPort: 5588,
name: 'healthcheck', // Health check
},
],
},
},
{
provider: kubernetesStack.provider,
},
);
```

Ingress:

```typescript
import * as k8s from '@pulumi/kubernetes';
import * as pulumi from '@pulumi/pulumi';
import * as service from './service';
import { productName, kubernetesStack, hostedZoneStack } from '../base';
export const serviceUrl = pulumi.interpolate`${productName}.${hostedZoneStack.default.name}`;
new k8s.networking.v1.Ingress(
productName,
{
metadata: {
name: productName,
namespace: productName,
annotations: {
'nginx.ingress.kubernetes.io/configuration-snippet': 'set $auth_mode "not-required";',
'nginx.ingress.kubernetes.io/proxy-read-timeout': '600',
'nginx.ingress.kubernetes.io/proxy-send-timeout': '600',
'nginx.ingress.kubernetes.io/backend-protocol': 'https',
},
},
spec: {
ingressClassName: 'nginx-internal',
tls: [
{
hosts: [serviceUrl],
},
],
rules: [
{
host: serviceUrl,
http: {
paths: [
{
path: '/',
pathType: 'Prefix',
backend: {
service: {
name: service.service.metadata.name,
port: {
number: 8443, // Main service
},
},
},
},
{
path: '/admin',
pathType: 'Prefix',
backend: {
service: {
name: service.service.metadata.name,
port: {
number: 4000, // Admin interface
},
},
},
}
],
},
},
],
},
},
{
provider: kubernetesStack.provider,
},
);
```

But the results are mostly the same:
Based on the error message, the URL isn't reaching the admin UI. In the above configuration that you shared, the path `/admin` is routed to the admin interface on port 4000, but the admin UI is served from the root path, so requests under `/admin` won't resolve to it.
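If that's the case, one option is a dedicated hostname whose root path is routed to port 4000. Here is a sketch under that assumption; the `-admin` host and resource names are invented for illustration:

```typescript
import * as k8s from '@pulumi/kubernetes';
import * as pulumi from '@pulumi/pulumi';
import * as service from './service';
import { productName, kubernetesStack, hostedZoneStack } from '../base';

// Illustrative: a separate hostname for the admin UI, which serves from the
// root path on port 4000 rather than under an '/admin' prefix.
export const adminUrl = pulumi.interpolate`${productName}-admin.${hostedZoneStack.default.name}`;

new k8s.networking.v1.Ingress(
  `${productName}-admin`,
  {
    metadata: {
      name: `${productName}-admin`,
      namespace: productName,
      annotations: {
        // The admin UI itself speaks HTTPS on port 4000.
        'nginx.ingress.kubernetes.io/backend-protocol': 'https',
      },
    },
    spec: {
      ingressClassName: 'nginx-internal',
      tls: [{ hosts: [adminUrl] }],
      rules: [
        {
          host: adminUrl,
          http: {
            paths: [
              {
                path: '/',
                pathType: 'Prefix',
                backend: {
                  service: {
                    name: service.service.metadata.name,
                    port: { number: 4000 },
                  },
                },
              },
            ],
          },
        },
      ],
    },
  },
  { provider: kubernetesStack.provider },
);
```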
Hi, sorry to bother you with this, but I'm not able to make the sidecar work (again) in EKS:
I need to configure a Security Server to consume a Gov service, and around August last year I made it work with these configurations:
Working env (dev):
- Deployment
- Service
- Ingress
That was working until today, but it looks like the Gov certs expired. I use Pulumi to manage the infrastructure, but when using that config to create the production deployment I get the following errors:
- Instance Identifier error
- Missing Anchor file
- Full Log
- SOAP
And the config is the same:
Non Working (prd):
- Deployment
- Service
- Ingress