
[Openshift 4.11.9] HA Setup with HAproxy (Keepalived) and DNS (Master, Slave)


 

This time the LB and DNS are configured redundantly (HA) for OpenShift 4.11.9. Real projects usually have a physical LB, but since this environment runs on virtual machines, HAproxy is used instead. The same goes for DNS.

 

Compared to the previous OpenShift 4.10.23 setup, only the version was upgraded and the components were made redundant.

 

 

 

First, here is the VM information.

#    Role       Hostname        OS            vCPU / MEM / Disk           Network     IP             Public IP
#1   bootstrap  maru-bootstrap  rhcos 4.11.9  4 vCPU / 16GB MEM / 50GB    VMNet-172   172.16.2.160   -
#2   bastion    maru-bastion    rhel 8.6      4 vCPU / 16GB MEM / 900GB   VMNet-172   172.16.2.161   192.168.1.167
#3   dns1       maru-dns1       rhel 8.6      4 vCPU / 16GB MEM / 100GB   VMNet-172   172.16.2.211   -
#4   dns2       maru-dns2       rhel 8.6      4 vCPU / 16GB MEM / 100GB   VMNet-172   172.16.2.212   -
#5   HAproxy1   maru-HAproxy1   rhel 8.6      4 vCPU / 16GB MEM / 100GB   VMNet-172   172.16.2.213   -
#6   HAproxy2   maru-HAproxy2   rhel 8.6      4 vCPU / 16GB MEM / 100GB   VMNet-172   172.16.2.214   -
#7   LB (VIP)   -               -             -                           VMNet-172   172.16.2.162   -
#8   master1    maru-master01   rhcos 4.11.9  4 vCPU / 16GB MEM / 50GB    VMNet-172   172.16.2.163   -
#9   master2    maru-master02   rhcos 4.11.9  4 vCPU / 16GB MEM / 50GB    VMNet-172   172.16.2.164   -
#10  master3    maru-master03   rhcos 4.11.9  4 vCPU / 16GB MEM / 50GB    VMNet-172   172.16.2.165   -
#11  worker1    maru-worker01   rhcos 4.11.9  4 vCPU / 16GB MEM / 50GB    VMNet-172   172.16.2.166   -
#12  worker2    maru-worker02   rhcos 4.11.9  4 vCPU / 16GB MEM / 50GB    VMNet-172   172.16.2.167   -
#13  infra1     maru-infra01    rhcos 4.11.9  4 vCPU / 16GB MEM / 50GB    VMNet-172   172.16.2.168   -
#14  infra2     maru-infra02    rhcos 4.11.9  4 vCPU / 16GB MEM / 50GB    VMNet-172   172.16.2.169   -

 

 

1. Named (DNS) Configuration

[DNS1]

Here is the DNS1 configuration.

DNS1 is set up as the Master.

Reverse (PTR) zones were not configured.

SRV records are used for service failover.

 

First, install the required packages.

yum install -y bind bind-utils

 

Add the zones to the rfc1912 file. The difference from a single-server setup is that the Slave's (DNS2) IP goes into allow-update.

cat <<EOF >> /etc/named.rfc1912.zones

zone "maru.ocp4.com" IN {
        type master;
        file "maru.ocp4.com.zone";
        allow-update { 172.16.2.212; };
};

zone "pool.ntp.org" IN {
        type master;
        file "/var/named/pool.ntp.org.zone";
        allow-update { 172.16.2.212; } ;
};
EOF
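
For reference: in BIND, zone transfers to the Slave are actually governed by allow-transfer (allow-update controls dynamic DNS updates); the transfer works here because transfers are not restricted in this configuration. If you prefer to restrict transfers to DNS2 explicitly, a minimal sketch (same zone and Slave IP as above) would be:

zone "maru.ocp4.com" IN {
        type master;
        file "maru.ocp4.com.zone";
        allow-transfer { 172.16.2.212; };
};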

 

Here are the DNS records.

vi /var/named/maru.ocp4.com.zone

$TTL 1D
@   IN SOA  @ ns1.maru.ocp4.com. (
                    20200522   ; serial
                    1D  ; refresh
                    1H  ; retry
                    1W  ; expire
                    3H )    ; minimum

                    IN NS   ns1.maru.ocp4.com.
                    IN NS   ns2.maru.ocp4.com.
                    IN A    172.16.2.211
                    IN A    172.16.2.212

; Bastion or Jumphost
bastion IN A 172.16.2.161
ns1      IN A    172.16.2.211
ns2      IN A    172.16.2.212


; Ancillary services
lb IN A 172.16.2.162

; HAproxy
HAproxy1 IN A 172.16.2.213
HAproxy2 IN A 172.16.2.214

;ocp cluster
bootstrap   IN  A   172.16.2.160
master01 IN  A   172.16.2.163
master02 IN  A   172.16.2.164
master03 IN  A   172.16.2.165

worker01 IN  A   172.16.2.166
worker02 IN  A   172.16.2.167

infra01 IN  A   172.16.2.168
infra02 IN  A   172.16.2.169

;ocp internal cluster ip
etcd-0  IN A    172.16.2.163
etcd-1  IN A    172.16.2.164
etcd-2  IN A    172.16.2.165

api-int         IN A 172.16.2.162
api             IN A 172.16.2.162
*.apps          IN A 172.16.2.162
apps            IN A 172.16.2.162

_etcd-server-ssl._tcp.maru.ocp4.com. IN SRV 0 10 2380 etcd-0.maru.ocp4.com.
_etcd-server-ssl._tcp.maru.ocp4.com. IN SRV 0 10 2380 etcd-1.maru.ocp4.com.
_etcd-server-ssl._tcp.maru.ocp4.com. IN SRV 0 10 2380 etcd-2.maru.ocp4.com.

 

Here is the NTP server zone.

vi /var/named/pool.ntp.org.zone
$TTL 1D
@   IN SOA  @ ns1.pool.ntp.org. (
            4019954001  ; serial
            3H          ; refresh
            1H          ; retry
            1W          ; expiry
            1H )        ; minimum

@           IN NS       ns1.pool.ntp.org.
@           IN NS       ns2.pool.ntp.org.
@           IN A        172.16.2.211
@           IN A        172.16.2.212

ns1          IN A        172.16.2.211
ns2          IN A        172.16.2.212

; ntp
*.rhel      IN A        172.16.2.211
*.rhel      IN A        172.16.2.212

 

 

Check that the zones are configured correctly.

chown root:named /var/named/maru.ocp4.com.zone
chown root:named /var/named/pool.ntp.org.zone
named-checkzone maru.ocp4.com /var/named/maru.ocp4.com.zone
named-checkzone pool.ntp.org /var/named/pool.ntp.org.zone

Now edit the config file.

Starting from the default configuration, it should be enough to allow any on port 53 (listen-on / allow-query) and set dnssec-validation to no.

 

cat /etc/named.conf
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//

options {
        listen-on port 53 { any; };
        listen-on-v6 port 53 { none; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        secroots-file   "/var/named/data/named.secroots";
        recursing-file  "/var/named/data/named.recursing";
        allow-query     { any; };

        /*
         - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion.
         - If you are building a RECURSIVE (caching) DNS server, you need to enable
           recursion.
         - If your recursive DNS server has a public IP address, you MUST enable access
           control to limit queries to your legitimate users. Failing to do so will
           cause your server to become part of large scale DNS amplification
           attacks. Implementing BCP38 within your network would greatly
           reduce such attack surface
        */
        recursion yes;

        dnssec-enable yes;
        dnssec-validation no;  ## disable DNSSEC validation

        managed-keys-directory "/var/named/dynamic";

        pid-file "/run/named/named.pid";
        session-keyfile "/run/named/session.key";

        /* https://fedoraproject.org/wiki/Changes/CryptoPolicy */
        include "/etc/crypto-policies/back-ends/bind.config";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
        type hint;
        file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

 

Restart the service.

systemctl restart named
systemctl status named
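
Once named is up, resolution can be verified directly against DNS1. These are just sample queries for the records defined above (test is an arbitrary label matching the *.apps wildcard):

dig +short api.maru.ocp4.com @172.16.2.211            # expect 172.16.2.162
dig +short test.apps.maru.ocp4.com @172.16.2.211      # expect 172.16.2.162
dig +short -t SRV _etcd-server-ssl._tcp.maru.ocp4.com @172.16.2.211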

 

 

 

 

[DNS2]

Now that DNS1 is done, configure DNS2.

Unlike DNS1, the zone type is slave, each zone file path is prefixed with slaves/, and instead of allow-update you list the Master's IP in masters.

 

cat /etc/named.rfc1912.zones
zone "maru.ocp4.com" IN {
        type slave;
        file "slaves/maru.ocp4.com.zone";
        masters { 172.16.2.211; };
};

zone "pool.ntp.org" IN {
        type slave;
        file "/var/named/slaves/pool.ntp.org.zone";
        masters { 172.16.2.211; } ;
};

 

After restarting the daemon, the zone files should have been created under the slaves directory.

systemctl restart named
ls -al /var/named/slaves/
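
To confirm the Slave actually answers from the transferred zone (not just that the files exist), query DNS2 directly, for example:

dig +short api.maru.ocp4.com @172.16.2.212
dig +short master01.maru.ocp4.com @172.16.2.212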

 

 

 

 

Now, from the bastion, try some lookups.
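
For example, querying each DNS server in turn (assuming the bastion can reach 172.16.2.211 and 172.16.2.212):

dig +short api.maru.ocp4.com @172.16.2.211
dig +short api.maru.ocp4.com @172.16.2.212
nslookup bootstrap.maru.ocp4.com 172.16.2.212

Both api queries should return the LB VIP, 172.16.2.162.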

 

 

 

 

 

 

2. Keepalived Configuration

 

[HAproxy1]

Now that DNS is set up, configure the LB.

Install the Keepalived package for redundancy.

 

yum install keepalived -y

 

Set up the config file. The priority only needs to be higher than on the Slave.

cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.16.2.162
    }
}
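
With only the config above, the VIP moves when the HAproxy1 host (or keepalived itself) goes down, but not when only the haproxy process dies. A common extension, shown here purely as a sketch and not part of the original setup, is to track the haproxy process and lower the priority when it is not running:

vrrp_script chk_haproxy {
    script "/usr/sbin/pidof haproxy"   # exits non-zero when haproxy is not running
    interval 2
    weight -150                        # 200 - 150 = 50, below the BACKUP priority of 100
}

vrrp_instance VI_1 {
    # ... existing settings from above ...
    track_script {
        chk_haproxy
    }
}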

 

Start the service.

systemctl enable keepalived.service --now

 

Check the result.

The virtual IP should now be visible on the interface specified in the config file.

ip a

 

 

 

[HAproxy2]

 

HAproxy2 is configured the same way.

The only differences are the state and priority values.

cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       172.16.2.162
    }
}

 

 

Start the service.

 

systemctl enable keepalived.service --now

 

The virtual IP does not show up here; this node only takes over (fails over) when HAproxy1 has a problem.
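
A simple failover test, assuming the setup above: stop keepalived on HAproxy1 and the VIP should show up on HAproxy2 within a few seconds, then move back once HAproxy1 is started again (the MASTER preempts by default):

# on HAproxy1
systemctl stop keepalived.service

# on HAproxy2
ip a show ens33            # 172.16.2.162 should now be listed here

# on HAproxy1, to restore
systemctl start keepalived.service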

 

 

 

 

 

 

3. HAproxy Configuration

 

[HAproxy1, HAproxy2]

 

The haproxy configuration is identical on both the Master and the Slave.

 

Install the package.

 

yum -y install haproxy

 

Here is the configuration file.

 

vi /etc/haproxy/haproxy.cfg

#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   https://www.haproxy.org/download/1.8/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

    # utilize system-wide crypto-policies
#    ssl-default-bind-ciphers PROFILE=SYSTEM
#    ssl-default-server-ciphers PROFILE=SYSTEM

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 4000


#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
    balance     roundrobin
    server      static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------

frontend internal-proxy
        bind 172.16.2.162:8080
        mode http
        default_backend internal-proxy

backend internal-proxy
        # health check
        option httpchk GET /
        http-check expect status 200
        default-server inter 1s fall 3 rise 2
        # load balancing
        balance  source
        mode     http
        server   HAproxy1 172.16.2.213:8080 check
        server   HAproxy2 172.16.2.214:8080 check


frontend openshift-api-server
        bind *:6443
        default_backend openshift-api-server
        mode tcp
        option tcplog

backend openshift-api-server
        balance source
        mode tcp
        server bootstrap 172.16.2.160:6443 check
        server master01 172.16.2.163:6443 check
        server master02 172.16.2.164:6443 check
        server master03 172.16.2.165:6443 check

frontend machine-config-server
        bind *:22623
        default_backend machine-config-server
        mode tcp
        option tcplog

backend machine-config-server
        balance source
        mode tcp
        server bootstrap 172.16.2.160:22623 check
        server master01 172.16.2.163:22623 check
        server master02 172.16.2.164:22623 check
        server master03 172.16.2.165:22623 check

frontend ingress-http
        bind *:80
        default_backend ingress-http
        mode tcp
        option tcplog

backend ingress-http
        balance source
        mode tcp
        server infra01 172.16.2.168:80 check
        server infra02 172.16.2.169:80 check
        server worker01 172.16.2.166:80 check
        server worker02 172.16.2.167:80 check

frontend ingress-https
        bind *:443
        default_backend ingress-https
        mode tcp
        option tcplog

backend ingress-https
        balance source
        mode tcp
        server infra01 172.16.2.168:443 check
        server infra02 172.16.2.169:443 check
        server worker01 172.16.2.166:443 check
        server worker02 172.16.2.167:443 check
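
Before starting the service, the configuration can be syntax-checked. Two environment-specific notes, both assumptions to verify in your own VMs: with SELinux enforcing, haproxy is typically blocked from the non-standard ports (6443, 22623) unless the haproxy_connect_any boolean is set, and since the internal-proxy frontend binds the VIP 172.16.2.162, the BACKUP node can only start haproxy if non-local binds are allowed:

haproxy -c -f /etc/haproxy/haproxy.cfg                                   # config syntax check
setsebool -P haproxy_connect_any 1                                       # only if SELinux is Enforcing
echo "net.ipv4.ip_nonlocal_bind = 1" > /etc/sysctl.d/99-haproxy.conf     # let the BACKUP node bind the VIP
sysctl --system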

 

Start and verify the service.

systemctl enable haproxy.service --now
systemctl status haproxy.service
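
To confirm haproxy is actually listening on the expected ports:

ss -lntp | grep haproxy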

 

 

 

 

 

 
