
How to scan the new lun in AIX and RHEL?

For AIX: capture the output below before scanning for the new LUN/disk.

#lspv
#lspv | wc -l

Now execute the command below to scan for the new LUN/disk.

#cfgmgr

Check that the new LUN/disk has been added to the box by comparing fresh output of the following commands against the old output.

#lspv
#lspv | wc -l

For RHEL Linux: capture the following output before scanning for the new LUN. (Knowing the size of the newly added LUN before the scan is helpful.)

fdisk -l
cat /proc/scsi/scsi
cat /proc/scsi/scsi | grep -i host | wc -l
multipath -l
tail -50 /var/log/messages

Now execute the command below to scan for the new LUN. Syntax:

echo "- - -" > /sys/class/scsi_host/host(n)/scan

For example:

#echo "- - -" > /sys/class/scsi_host/host0/scan

(Make sure there is a space between each hyphen in the echo command [ echo "- - -" ], and repeat this for every HBA.) Check the new LUN…
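Repeating the echo for every HBA by hand is error-prone, so the rescan can be wrapped in a small loop over all SCSI hosts. A sketch (the sysfs path is passed as a parameter so the loop can also be exercised against a test directory; on a live system pass /sys/class/scsi_host and run as root):

```shell
#!/bin/sh
# Rescan every SCSI host so newly mapped LUNs are discovered.
rescan_all_hosts() {
    scsi_sys="$1"
    for host in "$scsi_sys"/host*; do
        [ -e "$host/scan" ] || continue     # skip non-host entries / no matches
        echo "- - -" > "$host/scan"         # "- - -" = wildcard channel/target/LUN
    done
}

# On a live system:
#   rescan_all_hosts /sys/class/scsi_host
```

After the loop, compare `fdisk -l` and `multipath -l` output with the pre-scan copies to spot the new device.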

Devices In AIX

Devices In AIX. Objectives for the module:

- Understand the Pre-Defined and Customized Devices databases
- Describe the states of a device
- Logical and physical devices
- Understand device location codes
- How to add/change/delete devices

Understanding Devices
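The objectives above map directly onto a handful of AIX commands. A sketch, using hypothetical device names (hdisk0, ent2); substitute devices from your own `lsdev` output:

```shell
lsdev -P -H          # Pre-Defined devices database (all supported device types)
lsdev -C -H          # Customized database: configured devices and their states
lsdev -C -l hdisk0   # state of one device (Available or Defined)
lscfg -vl hdisk0     # location code and vital product data for a device

rmdev -l ent2        # move a device from Available to Defined (unconfigure)
mkdev -l ent2        # configure it back to Available
rmdev -dl ent2       # delete the device definition from the Customized database
```

`chdev` changes device attributes, and `cfgmgr` walks the buses to configure any newly detected devices.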

How to add IP alias in AIX?

Adding an IP alias in AIX. Using "smitty" we can configure an IP alias in AIX. It is best to use "smitty tcpip" to check and verify the configuration and interfaces.

Steps using SMITTY:
smitty tcpip --> Further Configuration --> Network Interfaces --> Network Interface Selection --> Configure Aliases --> Add an IPV4 Network Alias
(Select the available interface and press Enter; on the next screen, enter the IP address and the corresponding subnet mask and press Enter.)
At the end of the configuration the command status is shown ("OK" if everything went well).

Validation: execute the #ifconfig -a command and confirm that the newly added IP alias is present.

Steps using the CLI:
To temporarily add an IP alias with ifconfig (syntax):
#ifconfig <interface> alias <ip_address> netmask <subnet_mask> up
For example:
#ifconfig en0 alias <ip_address> netmask <subnet_mask> up
To remove the temporarily added IP alias with ifconfig (syntax):
#ifconfig <interface> delete <ip_address>
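As a concrete sketch of the CLI path, with a hypothetical interface (en0) and documentation-range address (192.0.2.10); substitute your own values:

```shell
# Add a temporary IP alias on en0 (AIX syntax; not persistent across reboots)
ifconfig en0 alias 192.0.2.10 netmask 255.255.255.0 up

# Verify the alias appears on the interface
ifconfig en0

# Remove the alias again
ifconfig en0 delete 192.0.2.10
```

Because an ifconfig alias does not survive a reboot, use the smitty path above when the alias must be persistent.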

CoD upgrade

Activating Capacity Upgrade on Demand. When you purchase one or more activation features, you receive corresponding activation codes to permanently activate your inactive processors or memory units. To permanently activate your inactive resources by retrieving and entering your activation code:

1. Retrieve the activation code by going to the Web site.
2. Enter the system type and serial number of your server.
3. Record the activation code that is displayed on the Web site.
4. Enter your activation code on your server using the HMC. To enter your code:
   a. In the navigation area of the HMC window, expand Systems Management.
   b. Select Servers.
   c. In the contents area, select the server on which you want to enter your activation code.
   d. Select Tasks > Capacity on Demand (CoD) > Enter CoD Code.
   e. Type your activation code in the Code field.
   f. Click OK.

Differences between JFS and Enhanced JFS

There are many differences between JFS and Enhanced JFS:

Function                  | JFS                           | Enhanced JFS
--------------------------|-------------------------------|-------------------------------
Optimization              | 32-bit kernel                 | 64-bit kernel
Maximum file system size  | 32 terabytes                  | 4 petabytes (see note)
Maximum file size         | 64 gigabytes                  | 4 petabytes (see note)
Number of i-nodes         | Fixed at file system creation | Dynamic, limited by disk space
Large file support        | As mount option               | Default
Online defragmentation    | Yes                           | Yes
namefs                    | Yes                           | Yes
DMAPI                     | No                            | Yes
Compression               | Yes                           | No
Quotas                    | Yes                           | Yes
Deferred update           | Yes                           | No
Direct I/O support        | Yes                           | Yes

Note: The 4-petabyte figures are architectural limits; AIX® currently supports only up to 16 terabytes.

Note: Cloning with a system backup (mksysb) from a 64-bit enabled JFS2 system to a 32-bit system will not be successful. Unlike the JFS file system, the JFS2 file system will not allow the link() API to be used on its binary type directory. This limitation may c…

Shared Ethernet Adapter (SEA) Failover with Load Balancing

Update: the developers and the manuals call this Load Sharing, but most people think of it as Load Balancing. Perhaps "balancing" gives the wrong impression of fine-grained, packet-by-packet balancing, where what we actually have is a higher-level, cruder splitting of the work; hence "Sharing". Below I use the word Balancing but mean Sharing. I have had a few questions recently on how to set this up, as the announcements contain near-zero information on the setup, the configuration needed, and a worked example. So here goes. For a long time now we have had SEA Failover, where the VIOS pair work together to provide a redundant path to the network that is simple to set up at the client VM (LPAR). A single virtual Ethernet network is managed between two Virtual I/O Servers (VIOS). The one with the higher priority (lower number) is the primary and does all the network bridging I/O; the secondary does nothing unless the primary is taken down or fails. Then the secondary takes over and does all t…
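Assuming an SEA Failover pair is already configured, switching it to load sharing is an attribute change on each SEA. A sketch, using a hypothetical SEA device name (ent5); list your own virtual devices first:

```shell
# From the padmin shell on each VIOS, find the SEA device name:
lsdev -virtual

# Switch the SEA from plain failover to load sharing
# (set it on the higher-priority primary first, then the backup):
chdev -dev ent5 -attr ha_mode=sharing

# Confirm the attribute took effect:
lsdev -dev ent5 -attr ha_mode
```

Both SEAs must be set to ha_mode=sharing, and sharing only splits work when the SEAs bridge more than one trunk adapter/VLAN; with a single trunk adapter the behaviour stays effectively primary/backup.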

How to Setup SEA Failover on DUAL VIO servers?

What needs to be done? Each SEA must have at least one virtual Ethernet adapter with the "Access external network" flag (previously known as the "trunk" flag) checked. This enables the SEA to provide bridging functionality between the two VIO servers. Note: the SEAs use the same PVID, but will have different priority values. Control channel: an additional virtual Ethernet adapter, which belongs to a unique VLAN on the system, is used to create the control channel between the SEAs; it must be specified on each SEA when ha_mode is configured. The purpose of this control channel is to let the two SEA adapters communicate and determine when a failover should take place. Limitation: SEA Failover was introduced with Fix Pack 7 (Virtual I/O Server version 1.2), so both Virtual I/O Servers need to be at this minimum level. Steps: Create the virtual Ethernet adapter with the following options on VIOS1: give the virtual adapter a unique (Port Virtu…
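The excerpt is cut off at adapter creation, but once the trunk and control-channel virtual adapters exist on each VIOS, the SEA itself is created with mkvdev. A sketch with hypothetical device names (verify yours with lsdev):

```shell
# Hypothetical devices on this VIOS:
#   ent0 = physical adapter
#   ent2 = trunk virtual adapter (PVID 1, "Access external network" checked)
#   ent3 = control-channel virtual adapter (unique VLAN)
# Run from the padmin shell on each VIOS:
mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1 \
       -attr ha_mode=auto ctl_chan=ent3

# Assuming the new SEA came up as ent4, check its state
# (PRIMARY on one VIOS, BACKUP on the other):
entstat -all ent4 | grep -i state
```

The priority set on each trunk adapter (1 on VIOS1, 2 on VIOS2) is what decides which side becomes primary.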

AIX: LVM Overview

LVM Theory

Physical Volume (PV) is IBM-speak for a disk (it could be worse: some IBMers still refer to DASD, Direct Access Storage Device, which is a mainframe term). PVs are:
- Named by AIX as hdisk0, hdisk1, hdisk2, ...
- Named that way regardless of the underlying technology (SCSI, SSA (sort of IBM's early SAN), SAN, RAID 5 using the adapter)
- Covered by disk-level and AIX automatic bad-block relocation

Volume Group (VG) is IBM-speak for a group of disks. Volume Group operations:
- Disk space is always allocated within a single VG
- All disks are available in AIX or none; they work as a group
- A VG can be exported and attached to another AIX system, which allows high availability (HACMP)
- The first VG is called rootvg

Root Volume Group (rootvg):
- Created automatically while installing AIX; the AIX files and initial paging space are placed within this VG
- Usually only the first disk, or two, to allow mirroring of rootvg
- Often internal disks
- Recommended: keep rootvg to a small number of disks

Other Volume Groups are created by the System Admin.
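A minimal sketch of the PV/VG concepts above on the command line, using hypothetical disk names (hdisk2, hdisk3) and a hypothetical VG name (datavg); check your actual disks with lspv first:

```shell
lspv                          # list physical volumes and their VG membership
mkvg -y datavg hdisk2 hdisk3  # create a volume group "datavg" from two PVs
lsvg datavg                   # show the VG's PP size, free space, state
lsvg -p datavg                # list the PVs inside the VG

varyoffvg datavg              # take the whole VG offline as a group...
exportvg datavg               # ...and export it, e.g. to attach to another host
```

The varyoffvg/exportvg pair is the mechanism behind the "can be exported to be attached to other AIX" point: the importing host runs importvg against the same disks.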