Posts

Showing posts from February, 2014

CoD upgrade

Activating Capacity Upgrade on Demand

When you purchase one or more activation features, you will receive corresponding activation codes to permanently activate your inactive processors or memory units. To permanently activate your inactive resources by retrieving and entering your activation code:

1. Retrieve the activation code by going to http://www-912.ibm.com/pod/pod
2. Enter the system type and serial number of your server.
3. Record the activation code that is displayed on the Web site.
4. Enter your activation code on your server using the HMC. To enter your code:
   a. In the navigation area of the HMC window, expand Systems Management.
   b. Select Servers.
   c. In the contents area, select the server on which you want to enter your activation code.
   d. Select Tasks > Capacity on Demand (CoD) > Enter CoD Code.
   e. Type your activation code in the Code field.
   f. Click OK.
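If you prefer the HMC command line over the GUI steps above, the code can also be entered with the HMC's `chcod` command. Treat the exact flags below as an assumption and confirm them against the `chcod` man page on your HMC release; the managed-system name and key are placeholders.

```shell
# List managed systems to find the name, type/model, and serial number:
lssyscfg -r sys -F name,type_model,serial_num

# Enter (-o e) the CoD activation code for the managed system
# "p750-server"; replace the key with your real activation code:
chcod -o e -m p750-server -k ABCD1234EFGH5678
```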

Differences between JFS and Enhanced JFS

There are many differences between JFS and Enhanced JFS:

Function                   JFS                             Enhanced JFS
Optimization               32-bit kernel                   64-bit kernel
Maximum file system size   32 terabytes                    4 petabytes (see note)
Maximum file size          64 gigabytes                    4 petabytes (see note)
Number of i-nodes          Fixed at file system creation   Dynamic, limited by disk space
Large file support         As mount option                 Default
Online defragmentation     Yes                             Yes
namefs                     Yes                             Yes
DMAPI                      No                              Yes
Compression                Yes                             No
Quotas                     Yes                             Yes
Deferred update            Yes                             No
Direct I/O support         Yes                             Yes

Note: 4 petabytes is an architectural limit. AIX® currently only supports up to 16 terabytes.

Note: Cloning with a system backup (mksysb) from a 64-bit enabled JFS2 system to a 32-bit system will not be successful. Unlike the JFS file system, the JFS2 file system will not allow the link() API to be used on its binary type directory. This limitation may c
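As a minimal sketch of the Enhanced JFS (JFS2) side of the table, this is how a JFS2 file system is typically created and mounted on AIX; the volume group name `datavg` and mount point `/data` are placeholders for your own setup.

```shell
# Create a 1 GB JFS2 file system in volume group "datavg",
# mounted at /data, and set it to mount automatically at boot (-A yes):
crfs -v jfs2 -g datavg -m /data -a size=1G -A yes

# Mount it and verify size and free space (in GB):
mount /data
df -g /data
```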

Setup a Two-Node Cluster with HACMP

Setup a Two-Node Cluster with HACMP

Contents:
- Introduction
- Setup and Preparation
  - Storage setup
  - Network setup
- Installation
  - Prerequisite filesets
  - HACMP filesets
- Cluster Topology Configuration
  - Define the Cluster
  - Define the Cluster Nodes
  - Define Cluster Sites
  - Define a Cluster Network
  - Add a Communication Interface for Heartbeat
  - Add Persistent IP Addresses
- Storage Configuration
  - Disk Heartbeat
- Resource Group Configuration
  - Application Volume Groups
  - Application Server
  - Cluster Service Address
  - Define Resource Group(s)
  - Create LVs and Filesystems for Applications
- Failover Test
  - Disk Heartbeat Check
  - Useful Commands: »clstat« and »snmp«
- Related Information

1. Introduction

This article describes how to set up a two-node cluster with IBM's standard cluster solution for AIX. Although the name has changed to Power HA with Version 5.5 and to Power HA Sy
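As a taste of the "Useful Commands" section listed in the contents above, these are the standard HACMP 5.x status utilities; the install paths are as on HACMP 5.x and should be verified on your release.

```shell
# Live cluster, node, network, and interface state (curses display):
/usr/es/sbin/cluster/clstat

# Show the configured cluster topology (nodes, networks, interfaces):
/usr/es/sbin/cluster/utilities/cltopinfo

# Show resource group state per node (online/offline, current location):
/usr/es/sbin/cluster/utilities/clRGinfo
```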

Shared Ethernet Adapter (SEA) Failover with Load Balancing

Update: The developers and the manuals call this Load Sharing, but most people call it Load Balancing. Perhaps "balancing" gives the wrong impression of fine-grained, packet-by-packet balancing, where what we actually have with Sharing is higher-level, cruder splitting of the work. Below I use the word Balancing but mean Sharing. I have had a few questions recently on how to set this up, as the announcements carry near-zero information on the setup, the configuration needed, and a worked example. So here goes. For a long time now we have had SEA Failover, where the VIOS pair work together to provide a redundant network path that is simple to set up at the client VM (LPAR). A single virtual Ethernet network is managed between two Virtual I/O Servers (VIOS). The one with the higher priority (lower number) is the primary and does all the network bridging I/O; the secondary does nothing unless the primary is taken down or fails. Then the secondary takes over and does all t
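The Load Sharing mode described above is selected with the SEA's `ha_mode` attribute on the VIOS. A sketch, run from the padmin restricted shell on each VIOS; the adapter names (ent0 physical, ent4/ent5 trunk virtuals, ent6 control channel, ent7 resulting SEA) are placeholders from a typical dual-VIOS layout.

```shell
# Create the SEA with load sharing: two trunk virtual adapters
# (different trunk priorities on each VIOS) plus a control channel.
mkvdev -sea ent0 -vadapter ent4 ent5 -default ent4 -defaultid 1 \
       -attr ha_mode=sharing ctl_chan=ent6

# Or switch an existing SEA from plain failover to load sharing:
chdev -dev ent7 -attr ha_mode=sharing
```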

How to Setup SEA Failover on DUAL VIO servers?

What needs to be done?

- Each SEA must have at least one virtual Ethernet adapter with the "Access external network" flag (previously known as the "trunk" flag) checked. This enables the SEA to provide bridging functionality between the two VIO servers.
  Note: Both SEAs have the same PVID, but will have different priority values.
- Control channel: An additional virtual Ethernet adapter, which belongs to a unique VLAN on the system, is used to create the control channel between the SEAs; it must be specified on each SEA when configured in ha_mode. The purpose of this control channel is to let the two SEA adapters communicate and determine when a failover should take place.
- Limitation: SEA Failover was introduced with Fixpack 7 (Virtual I/O Server version 1.2), so both Virtual I/O Servers need to be at this minimum level.

Steps:
1. Create the virtual Ethernet adapter with the following option on VIOS1: virtual adapter a unique (Port Virtu
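Once the trunk and control-channel virtual adapters exist on both VIO servers, the SEA itself is built the same way on each side. A minimal sketch from the padmin shell; the adapter names are placeholders (check yours with `lsdev -type adapter`), and the differing trunk priority set in the HMC profile decides which side is primary.

```shell
# On each VIOS: bridge physical ent0 with trunk virtual ent2,
# using ent3 (unique VLAN) as the control channel, in failover mode.
# VIOS1's trunk adapter has priority 1 (primary), VIOS2's priority 2 (standby).
mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 \
       -attr ha_mode=auto ctl_chan=ent3

# Check which side is currently active (look for PRIMARY/BACKUP):
entstat -all ent4 | grep -i state
```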

How to configure NPIV (N_Port ID Virtualization)

Step By Step NPIV Configuration

For maximum path redundancy, create the instances on dual VIOS. We will consider a scenario with a Power6/7 server, two PCI dual/single-port 8 GB Fibre Channel cards, VIOS level 2.2 FP24 installed, and the VIOS in a shutdown state. First we need to create a virtual fibre channel adapter on each VIOS, which we will later map to a physical fibre channel adapter after logging into the VIOS, similar to what we do for Ethernet.

Please note: Create all the LPAR clients as per requirements first, and then configure the virtual fibre channel adapters on the VIOS. Since we are mapping one single physical fibre channel adapter to different hosts, we need to create that many virtual fibre channel adapters. A virtual fibre channel adapter can be created dynamically, but don't forget to add it to the profile as well, or you will lose the configuration on power-off.

1. Create a virtual fibre channel adapter on both VIOS servers.
   HMC --> Managed System --> Manage Profile --> Virtual
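After the virtual fibre channel server adapters have been created through the HMC profile, the mapping step on the VIOS looks roughly like this (padmin shell; `vfchost0` and `fcs0` are placeholder device names from a typical setup).

```shell
# List the virtual FC server adapters created via the HMC profile:
lsdev -dev vfchost*

# Show which physical FC ports are NPIV-capable and have fabric support:
lsnports

# Map the virtual FC server adapter to a physical NPIV-capable port:
vfcmap -vadapter vfchost0 -fcp fcs0

# Verify the mapping and the client adapter login state:
lsmap -all -npiv
```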