Chapter 11. Clustered Samba Configuration

As of the Red Hat Enterprise Linux 6.2 release, the Red Hat High Availability Add-On provides support for running Clustered Samba in an active/active configuration. This requires that you install and configure CTDB on all nodes in the cluster and use it in conjunction with GFS2 clustered file systems.

Note

Red Hat Enterprise Linux 6 supports a maximum of four nodes running clustered Samba.

This chapter describes the procedure for configuring CTDB by walking through the configuration of an example system. For information on configuring GFS2 file systems, refer to Global File System 2. For information on configuring logical volumes, refer to Logical Volume Manager Administration.

Note

Simultaneous access to the data in the Samba share from outside of Samba is not supported.

11.1. CTDB Overview

CTDB is a cluster implementation of the TDB database used by Samba. To use CTDB, a clustered file system must be available and shared on all nodes in the cluster. CTDB provides clustered features on top of this clustered file system. As of the Red Hat Enterprise Linux 6.2 release, CTDB also runs a cluster stack in parallel to the one provided by Red Hat Enterprise Linux clustering. CTDB manages node membership, recovery/failover, IP relocation and Samba services.

11.2. Required Packages

In addition to the standard packages required to run the Red Hat High Availability Add-On and the Red Hat Resilient Storage Add-On, running Samba with Red Hat Enterprise Linux clustering requires the following packages:
  • ctdb
  • samba
  • samba-common
  • samba-winbind-clients
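For example, assuming the standard Red Hat Enterprise Linux repositories are available, you can install all of these packages on each node with a single yum command:
[root@clusmb-01 ~]# yum install ctdb samba samba-common samba-winbind-clients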

11.3. GFS2 Configuration

Configuring Samba with Red Hat Enterprise Linux clustering requires two GFS2 file systems: one small file system for CTDB, and a second file system for the Samba share. This example shows how to create the two GFS2 file systems.
Before creating the GFS2 file systems, first create an LVM logical volume for each of the file systems. For information on creating LVM logical volumes, refer to Logical Volume Manager Administration. This example uses the following logical volumes:
  • /dev/csmb_vg/csmb_lv, which will hold the user data that will be exported via a Samba share and should be sized accordingly. This example creates a logical volume that is 100GB in size.
  • /dev/csmb_vg/ctdb_lv, which will store the shared CTDB state information and needs to be 1GB in size.
You create clustered volume groups and logical volumes on one node of the cluster only.
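For illustration only, commands along the following lines would create a clustered volume group and the two logical volumes used in this example. The underlying physical volume /dev/sdb is an assumption; your device names and sizes may differ, and you should refer to Logical Volume Manager Administration for the full procedure:
[root@clusmb-01 ~]# pvcreate /dev/sdb
[root@clusmb-01 ~]# vgcreate -cy csmb_vg /dev/sdb
[root@clusmb-01 ~]# lvcreate -L 100G -n csmb_lv csmb_vg
[root@clusmb-01 ~]# lvcreate -L 1G -n ctdb_lv csmb_vg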
To create a GFS2 file system on a logical volume, run the mkfs.gfs2 command. You run this command on one cluster node only.
To create the file system to host the Samba share on the logical volume /dev/csmb_vg/csmb_lv, execute the following command:
[root@clusmb-01 ~]# mkfs.gfs2 -j3 -p lock_dlm -t csmb:gfs2 /dev/csmb_vg/csmb_lv
The meaning of the parameters is as follows:
-j
Specifies the number of journals to create in the filesystem. This example uses a cluster with three nodes, so we create one journal per node.
-p
Specifies the locking protocol. lock_dlm is the locking protocol GFS2 uses for inter-node communication.
-t
Specifies the lock table name and is of the format cluster_name:fs_name. In this example, the cluster name as specified in the cluster.conf file is csmb, and we use gfs2 as the name for the file system.
The output of this command appears as follows:
This will destroy any data on /dev/csmb_vg/csmb_lv.
  It appears to contain a gfs2 filesystem.

Are you sure you want to proceed? [y/n] y

Device:            /dev/csmb_vg/csmb_lv
Blocksize:         4096
Device Size        100.00 GB (26214400 blocks)
Filesystem Size:   100.00 GB (26214398 blocks)
Journals:          3
Resource Groups:   400
Locking Protocol:  "lock_dlm"
Lock Table:        "csmb:gfs2"
UUID:              94297529-ABG3-7285-4B19-182F4F2DF2D7
In this example, the /dev/csmb_vg/csmb_lv file system will be mounted at /mnt/gfs2 on all nodes. This mount point must match the value that you specify as the location of the share directory with the path = option in the /etc/samba/smb.conf file, as described in Section 11.5, “Samba Configuration”.
To create the file system to host the CTDB state information on the logical volume /dev/csmb_vg/ctdb_lv, execute the following command:
[root@clusmb-01 ~]# mkfs.gfs2 -j3 -p lock_dlm -t csmb:ctdb_state /dev/csmb_vg/ctdb_lv
Note that this command specifies a different lock table name than the lock table in the example that created the filesystem on /dev/csmb_vg/csmb_lv. This distinguishes the lock table names for the different devices used for the file systems.
The output of the mkfs.gfs2 command appears as follows:
This will destroy any data on /dev/csmb_vg/ctdb_lv.
  It appears to contain a gfs2 filesystem.

Are you sure you want to proceed? [y/n] y

Device:            /dev/csmb_vg/ctdb_lv
Blocksize:         4096
Device Size        1.00 GB (262144 blocks)
Filesystem Size:   1.00 GB (262142 blocks)
Journals:          3
Resource Groups:   4
Locking Protocol:  "lock_dlm"
Lock Table:        "csmb:ctdb_state"
UUID:              BCDA8025-CAF3-85BB-B062-CC0AB8849A03
In this example, the /dev/csmb_vg/ctdb_lv file system will be mounted at /mnt/ctdb on all nodes. This mount point must match the value that you specify as the location of the .ctdb.lock file with the CTDB_RECOVERY_LOCK option in the /etc/sysconfig/ctdb file, as described in Section 11.4, “CTDB Configuration”.
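For illustration, /etc/fstab entries along the following lines on each node would mount the two file systems at the mount points used in this example. The mount options shown are an assumption and may need adjusting for your site:
/dev/csmb_vg/csmb_lv  /mnt/gfs2  gfs2  defaults  0 0
/dev/csmb_vg/ctdb_lv  /mnt/ctdb  gfs2  defaults  0 0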

11.4. CTDB Configuration

The CTDB configuration file is located at /etc/sysconfig/ctdb. The mandatory fields that must be configured for CTDB operation are as follows:
  • CTDB_NODES
  • CTDB_PUBLIC_ADDRESSES
  • CTDB_RECOVERY_LOCK
  • CTDB_MANAGES_SAMBA (must be enabled)
  • CTDB_MANAGES_WINBIND (must be enabled if running on a member server)
The following example shows a configuration file with the mandatory fields for CTDB operation set with example parameters:
CTDB_NODES=/etc/ctdb/nodes
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_RECOVERY_LOCK="/mnt/ctdb/.ctdb.lock"
CTDB_MANAGES_SAMBA=yes
CTDB_MANAGES_WINBIND=yes
The meaning of these parameters is as follows.
CTDB_NODES
Specifies the location of the file which contains the cluster node list.
The /etc/ctdb/nodes file that CTDB_NODES references simply lists the IP addresses of the cluster nodes, as in the following example:
192.168.1.151
192.168.1.152
192.168.1.153
In this example, there is only one interface/IP on each node that is used for both cluster/CTDB communication and serving clients. However, it is highly recommended that each cluster node have two network interfaces so that one set of interfaces can be dedicated to cluster/CTDB communication and another set of interfaces can be dedicated to public client access. Use the appropriate IP addresses of the cluster network here and make sure the hostnames/IP addresses used in the cluster.conf file are the same. Similarly, use the appropriate interfaces of the public network for client access in the public_addresses file.
It is critical that the /etc/ctdb/nodes file is identical on all nodes because the ordering is important and CTDB will fail if it finds different information on different nodes.
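One simple way to keep the file identical is to create it on one node and copy it to the others. For example, assuming ssh access between the nodes and the hypothetical node names clusmb-02 and clusmb-03:
[root@clusmb-01 ~]# scp /etc/ctdb/nodes clusmb-02:/etc/ctdb/nodes
[root@clusmb-01 ~]# scp /etc/ctdb/nodes clusmb-03:/etc/ctdb/nodes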
CTDB_PUBLIC_ADDRESSES
Specifies the location of the file that lists the IP addresses that can be used to access the Samba shares exported by this cluster. These are the IP addresses that you should configure in DNS for the name of the clustered Samba server and are the addresses that CIFS clients will connect to. Configure the name of the clustered Samba server as one DNS type A record with multiple IP addresses and let round-robin DNS distribute the clients across the nodes of the cluster.
For this example, we have configured a round-robin DNS entry csmb-server with all the addresses listed in the /etc/ctdb/public_addresses file. DNS will distribute the clients that use this entry across the cluster in a round-robin fashion.
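For illustration, a BIND-style zone file excerpt for such a round-robin entry might look like the following; the record name and addresses are this chapter's example values, and you should adapt them to your own DNS setup:
csmb-server    IN  A  192.168.1.201
csmb-server    IN  A  192.168.1.202
csmb-server    IN  A  192.168.1.203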
The contents of the /etc/ctdb/public_addresses file on each node are as follows:
192.168.1.201/24 eth0
192.168.1.202/24 eth0
192.168.1.203/24 eth0
This example uses three addresses that are currently unused on the network. In your own configuration, choose addresses that can be accessed by the intended clients.
Alternatively, this example shows the contents of the /etc/ctdb/public_addresses files in a cluster in which there are three nodes but a total of four public addresses. In this example, IP address 198.162.2.1 can be hosted by either node 0 or node 1 and will be available to clients as long as at least one of these nodes is available. Only if both nodes 0 and 1 fail does this public address become unavailable to clients. Each of the remaining public addresses can be served by only a single node and is therefore available only while that node is available.
The /etc/ctdb/public_addresses file on node 0 includes the following contents:
198.162.1.1/24 eth0
198.162.2.1/24 eth1
The /etc/ctdb/public_addresses file on node 1 includes the following contents:
198.162.2.1/24 eth1
198.162.3.1/24 eth2
The /etc/ctdb/public_addresses file on node 2 includes the following contents:
198.162.3.2/24 eth2
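Once the cluster is running, you can verify which node is currently hosting each public address with the ctdb ip command, for example:
[root@clusmb-01 ~]# ctdb ip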
CTDB_RECOVERY_LOCK
Specifies a lock file that CTDB uses internally for recovery. This file must reside on shared storage such that all the cluster nodes have access to it. The example in this section uses the GFS2 file system that will be mounted at /mnt/ctdb on all nodes. This is different from the GFS2 file system that will host the Samba share that will be exported. This recovery lock file is used to prevent split-brain scenarios. With newer versions of CTDB (1.0.112 and later), specifying this file is optional as long as it is substituted with another split-brain prevention mechanism.
CTDB_MANAGES_SAMBA
When enabled by setting it to yes, this parameter specifies that CTDB is allowed to start and stop the Samba service as it deems necessary to provide service migration/failover.
When CTDB_MANAGES_SAMBA is enabled, you should disable automatic init startup of the smb and nmb daemons by executing the following commands:
[root@clusmb-01 ~]# chkconfig smb off
[root@clusmb-01 ~]# chkconfig nmb off
CTDB_MANAGES_WINBIND
When enabled by setting it to yes, this parameter specifies that CTDB is allowed to start and stop the winbind daemon as required. This should be enabled when you are using CTDB in a Windows domain or in Active Directory security mode.
When CTDB_MANAGES_WINBIND is enabled, you should disable automatic init startup of the winbind daemon by executing the following command:
[root@clusmb-01 ~]# chkconfig winbind off
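With CTDB managing the smb, nmb, and winbind daemons, it is the ctdb service itself that you start on each cluster node, for example:
[root@clusmb-01 ~]# service ctdb start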

Source: Red Hat Enterprise Linux documentation.
