Posts

Configuring MAC binding in DHCP Server

Configuring MAC binding in a DHCP server means permanently assigning a static Internet Protocol (IP) address to a DHCP client based on the client's MAC address. We do not want to hand out automatically changing IP addresses to servers that provide services. For example, if the IP address of an NFS or Samba server changes after a reboot or a network restart, clients can no longer reach the NFS and Samba shares at the old address, and we would have to notify employees every time the server IP address changes. Beyond access to NFS and Samba shares, some shares may be used for hosting applications, and hard-coded links in HTML/PHP intranet pages are all affected by a single IP address change. Our goal is to assign a static IP address to a DHCP client (server) through the DHCP server configuration, which is what MAC binding means. The first step is to configure the DHCP server.
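As a minimal sketch of the end result, assuming the ISC DHCP server (dhcpd) is in use: a MAC binding is simply a host declaration in dhcpd.conf. The host name, MAC address and IP address below are made-up example values.

    host nfs-server {
        hardware ethernet 00:16:3e:aa:bb:cc;    # MAC address of the client (example)
        fixed-address 192.168.1.50;             # static IP always handed to that MAC
    }

After adding the declaration, restart the dhcpd service so the reservation takes effect.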

Zimbra mail server installation and configuration on RHEL 7 / CentOS 7

Zimbra mail server is a free, open-source email server that provides calendar and collaboration features, and it ships with a GUI administration console. Zimbra Collaboration includes the Zimbra MTA, the Zimbra LDAP server, and the Zimbra mailbox server. In a single-server installation, all components are installed on one server and require no additional manual configuration. This installation guide is a quick-start guide that describes the basic steps needed to install and configure Zimbra Collaboration in a direct network connect environment. In this environment, the Zimbra server is assigned a domain for which it receives mail and has a direct network connection to the Internet. When Zimbra Collaboration is installed, you will be able to log on to the Zimbra administration console to manage the domain and provision accounts. In this tutorial / article we will explain how to install and configure Zimbra mail server on RHEL 7.x / CentOS 7.
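As a rough sketch of the installation step itself (the exact archive name depends on the release you download from the Zimbra site):

    # tar xzf zcs-*.tgz        (unpack the downloaded open-source edition)
    # cd zcs-*                 (enter the extracted directory)
    # ./install.sh             (run the installer and follow the prompts)

The installer checks prerequisites, lets you choose the packages (MTA, LDAP, mailbox store and so on) and asks for the admin account password.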

Linux and Solaris Routing Basics

Creating routes in Linux: basic Linux routing. Did you know that a Linux OS can be used as a router? Here we will see how to implement routing on a Linux box. When I was working for Cisco Systems, they used Linux servers as routers in their test environments in order to communicate between routers in the lab. There is even a project called Linux routing, and a separate Linux distribution built just for routing; that's the flexibility of Linux. Let us start with some basic routing commands. In Linux/*nix almost every task can be done in two ways: 1) the temporary way (the changes are lost after a reboot) and 2) the permanent way (the changes survive reboots). Today we will see how to add routes both the temporary and the permanent way.
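As a minimal sketch of the temporary way, assuming a made-up target network 192.168.20.0/24 reachable through the gateway 10.0.0.1 on interface eth0:

    # ip route add 192.168.20.0/24 via 10.0.0.1 dev eth0                 (iproute2 syntax)
    # route add -net 192.168.20.0 netmask 255.255.255.0 gw 10.0.0.1      (older net-tools syntax)
    # ip route show                                                      (verify the new route)

Both routes are lost at the next reboot; the permanent way is to put the route in the distribution's network configuration files (for example route-eth0 on RHEL-family systems).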

Mapping Virtual Fibre Channel Adapters on VIO Servers

A new LPAR was created and new virtual fibre channel adapters were presented to both VIO servers using DLPAR. Now it's time to map the newly created virtual fibre channel adapter to a physical fibre channel adapter. But which vfchost device should be mapped? What checks need to be done? I'll step you through the process of mapping an NPIV virtual fibre channel adapter to a physical adapter on the VIO server. Check your virtual adapter (vfchost): in this example, I have performed a DLPAR of a virtual fibre channel adapter with the ID 38 on both VIO servers. We now need to identify the vfchost device presented to the VIO server in order to be able to map it later. Use the lsdev command with the -slots flag in the VIO restricted shell.
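A hedged sketch of the mapping itself, with made-up device names (the actual vfchost and fcs numbers come from the lsdev -slots output and from the NPIV-capable ports on your VIOS):

    $ lsdev -slots                           (find the vfchost sitting in slot ID 38)
    $ lsnports                               (list NPIV-capable physical FC ports and their free capacity)
    $ vfcmap -vadapter vfchost5 -fcp fcs0    (map the virtual adapter to a physical port)
    $ lsmap -vadapter vfchost5 -npiv         (verify the mapping and the client WWPNs)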

NPIV (Virtual Fibre Channel Adapter) Concept

NPIV (Virtual Fibre Channel Adapter) With NPIV, you can configure the managed system so that multiple logical partitions can access independent physical storage through the same physical fibre channel adapter. (NPIV stands for N_Port ID Virtualization. N_Port ID is a storage term, short for node port ID, used to identify ports on a node (FC adapter) in the SAN.) To access physical storage in a typical storage area network (SAN) that uses fibre channel, the physical storage is mapped to logical units (LUNs) and the LUNs are mapped to the ports of physical fibre channel adapters. Each physical port on each physical fibre channel adapter is identified using one worldwide port name (WWPN). NPIV is a standard technology for fibre channel networks that enables you to connect multiple logical partitions to one physical port of a physical fibre channel adapter. Each logical partition is identified by a unique WWPN, which means that you can connect each logical partition to independent physical storage on a SAN.
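For example, on the AIX client LPAR you can display the virtual WWPN assigned to its virtual fibre channel adapter (the device name fcs0 is just an example):

    # lscfg -vpl fcs0 | grep "Network Address"

The WWPN shown there is what the SAN administrator zones and maps LUNs to, exactly as with a physical adapter.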

How to scan a new LUN in AIX and RHEL?

How to scan a new LUN in AIX and RHEL?

For AIX: capture the following output before you scan for the new LUN/disk:
    # lspv
    # lspv | wc -l
Now execute the command below to scan for the new LUN/disk:
    # cfgmgr
Check that the new LUN/disk has been added to the box by comparing the new output of the following with the old output:
    # lspv
    # lspv | wc -l

For RHEL Linux: capture the following output before you scan for the new LUN (knowing the size of the newly added LUN before you scan makes it easier to spot):
    # fdisk -l
    # cat /proc/scsi/scsi
    # cat /proc/scsi/scsi | grep -i host | wc -l
    # multipath -l
    # tail -50 /var/log/messages
Now execute the command below to scan for the new LUN.
Syntax:  echo "- - -" > /sys/class/scsi_host/host(n)/scan
    # echo "- - -" > /sys/class/scsi_host/host0/scan
(Make sure there is a space between the hyphens in the echo command [ echo "- - -" ], and repeat the command for every HBA.) Check that the new LUN/disk has appeared by comparing the new output with the old output.
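A small sketch to rescan every SCSI host in one go instead of typing the echo for each HBA (bash syntax):

    # for h in /sys/class/scsi_host/host*; do echo "- - -" > $h/scan; done

After the rescan, compare fdisk -l, cat /proc/scsi/scsi and multipath -ll with the output captured earlier to spot the new device.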

Devices In AIX

Devices In AIX

Objectives for the module:
    Understand the Pre-Defined and Customized device databases
    Describe the states of a device
    Logical and physical devices
    Understand device location codes
    How to add/change/delete devices
    Understanding devices
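As a quick illustration of the two device databases, these standard AIX commands list them:

    # lsdev -P -H        (Pre-Defined database: every device type this AIX release supports)
    # lsdev -C -H        (Customized database: devices actually configured on this system, with their state)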

How to add an IP alias in AIX?

Adding an IP alias in AIX. Using "smitty" we can configure an IP alias in AIX. It is best to use "smitty tcpip" to check and verify the configuration and interfaces. Steps using SMITTY: smitty tcpip --> Further Configuration --> Network Interfaces --> Network Interface Selection --> Configure Aliases --> Add an IPV4 Network Alias (here select the available interface and press Enter; on the next screen enter the IP address and the corresponding subnet mask and press Enter). At the end of the configuration the command status is shown ("OK" if everything went fine). Validation: execute the #ifconfig -a command and confirm that the newly added IP alias is present. Steps using the CLI: To temporarily add an IP alias with ifconfig: (Syntax) #ifconfig <interface> alias <IP address> netmask <netmask> up (For example): #ifconfig en0 alias 192.168.4.75 netmask 255.255.255.0 up. To remove the temporarily added IP alias with ifconfig: (Syntax) #ifconfig <interface> delete <IP address>
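To make an alias persistent across reboots (unlike the temporary ifconfig method above), chdev can be used; the interface and addresses are the same example values as above:

    # chdev -l en0 -a alias4=192.168.4.75,255.255.255.0

This stores the alias in the ODM as an interface attribute, so it is re-applied at every boot.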

Network

NETWORK CONFIGURATION AT BOOT TIME:
1. /etc/rc.net       Configures and starts the TCP/IP interfaces. Sets the hostname, the default gateway and static routes (it is called by cfgmgr). Then, during initialization, the file /etc/inittab is processed; it contains these 2 entries:
        ...
        rctcpip:23456789:wait:/etc/rc.tcpip > /dev/console 2>&1 # Start TCP/IP daemons
        rcnfs:23456789:wait:/etc/rc.nfs > /dev/console 2>&1 # Start NFS Daemons
        ...
2. /etc/rc.tcpip     Starts the TCP/IP daemons (sendmail, portmap, inetd, etc., plus other daemons such as syslogd, lpd, ...).
3. /etc/inetd.conf   When inetd is started, it reads its configuration from this file, which contains the names of the services that inetd listens for and starts on demand.
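Two commands that are handy when working with these boot-time daemons (standard AIX SRC commands):

    # lssrc -g tcpip          (show the status of the TCP/IP daemon group)
    # refresh -s inetd        (make inetd re-read /etc/inetd.conf after a change)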

QUICK SETUP GUIDE FOR HACMP/POWERHA

Use this procedure to quickly configure an HACMP cluster consisting of 2 nodes and disk heart-beating. Prerequisites: make sure you have the following in place: the IP addresses and host names of both nodes, and of a service IP label; add these to the /etc/hosts files on both nodes of the new HACMP cluster. Make sure you have the HACMP software installed on both nodes; just install all the filesets from the HACMP CD-ROM and you should be good. Make sure you have this entry in /etc/inittab (as one of the last entries): clinit:a:wait:/bin/touch /usr/es/sbin/cluster/.telinit. In case you're using EMC SAN storage, make sure you configure your disks correctly as hdiskpower devices; or, if you're using a mksysb image, you may want to follow the EMC ODM cleanup procedure.
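As an illustration only, the /etc/hosts entries on both nodes might look like this (node names and addresses are made up):

    10.1.1.11   node1
    10.1.1.12   node2
    10.1.1.20   cluster_svc       # service IP label

Keep the file identical on both nodes so that cluster verification does not complain about name resolution.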

CoD upgrade

Activating Capacity Upgrade on Demand. When you purchase one or more activation features, you will receive corresponding activation codes to permanently activate your inactive processors or memory units. To permanently activate your inactive resources by retrieving and entering your activation code:
1. Retrieve the activation code by going to http://www-912.ibm.com/pod/pod
2. Enter the system type and serial number of your server.
3. Record the activation code that is displayed on the Web site.
4. Enter your activation code on your server using the HMC. To enter your code:
   a. In the navigation area of the HMC window, expand Systems Management.
   b. Select Servers.
   c. In the contents area, select the server on which you want to enter your activation code.
   d. Select Tasks > Capacity on Demand (CoD) > Enter CoD Code.
   e. Type your activation code in the Code field.
   f. Click OK.

Differences between JFS and Enhanced JFS

There are many differences between JFS and Enhanced JFS (JFS2):

    Function                     JFS                             Enhanced JFS
    Optimization                 32-bit kernel                   64-bit kernel
    Maximum file system size     32 terabytes                    4 petabytes (Note 1)
    Maximum file size            64 gigabytes                    4 petabytes (Note 1)
    Number of i-nodes            Fixed at file system creation   Dynamic, limited by disk space
    Large file support           As mount option                 Default
    Online defragmentation       Yes                             Yes
    namefs                       Yes                             Yes
    DMAPI                        No                              Yes
    Compression                  Yes                             No
    Quotas                       Yes                             Yes
    Deferred update              Yes                             No
    Direct I/O support           Yes                             Yes

Note 1: This is an architectural limit. AIX® currently supports only up to 16 terabytes.

Note: Cloning with a system backup (mksysb) from a 64-bit enabled JFS2 system to a 32-bit system will not be successful. Unlike the JFS file system, the JFS2 file system will not allow the link() API to be used on its binary type directory. This limitation may cause some applications that operate correctly on a JFS file system to fail on a JFS2 file system.
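To check which type an existing filesystem is, lsfs prints a VFS column showing jfs or jfs2, for example:

    # lsfs /home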

Setup a Two-Node Cluster with HACMP

Setup a Two-Node Cluster with HACMP

Contents:
    Introduction
    Setup and Preparation
    Storage setup
    Network setup
    Installation
    Prerequisite filesets
    HACMP Filesets
    Cluster Topology Configuration
    Define the Cluster
    Define the Cluster Nodes
    Define Cluster Sites
    Define a Cluster Network
    Add a Communication Interface for Heartbeat
    Add Persistent IP Addresses
    Storage Configuration
    Disk Heartbeat
    Resource Group Configuration
    Application Volume Groups
    Application Server
    Cluster Service Address
    Define Resource Group(s)
    Create LVs and Filesystems for Applications
    Failover Test
    Disk Heartbeat Check
    Useful Commands »clstat« and »snmp«
    Related Information

1. Introduction This article describes how to set up a two-node cluster with IBM's standard cluster solution for AIX. Although the name has changed to PowerHA with Version 5.5 and to PowerHA SystemMirror

Shared Ethernet Adapter (SEA) Failover with Load Balancing

Update: The developers and the manuals call this Load Sharing, but most people think of it as Load Balancing. Perhaps "balancing" gives the wrong impression of fine-grained, packet-by-packet balancing, whereas what we actually have with Sharing is a higher-level, cruder splitting of the work. Below I use the word Balancing but mean Sharing. I have had a few questions recently on how to set this up, as the announcements contain near zero information on the setup, the configuration needed and a worked example. So here goes. For a long time now we have had SEA Failover, where the VIOS pair work together to provide a redundant path to the network that is simple to set up at the client VM (LPAR). A single virtual Ethernet network is managed between two Virtual I/O Servers (VIOS). The one with the higher priority (lower number) is the primary and does all the network bridging I/O, and the secondary does nothing unless the primary is taken down or fails. Then the secondary takes over and does all the network bridging I/O.
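For reference, a hedged sketch of how the SEA is typically created on each VIOS with load sharing enabled; all the entX device names, the control channel and the PVID below are examples, not a recipe for your system:

    $ mkvdev -sea ent0 -vadapter ent4 ent5 -default ent4 -defaultid 1 -attr ha_mode=sharing ctl_chan=ent6

With ha_mode=sharing (instead of ha_mode=auto) the two VIOS split the bridged VLANs between them, while each still stands ready to take over the other's VLANs on a failure.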