Posts

Showing posts from September, 2013

NIM Basics

Basics

Master (NIM master): The one and only machine in a NIM environment that has permission to run commands remotely on NIM clients. The NIM master holds all the NIM resources. A client can have only one master, and a master cannot be a client of any other master. The NIM master must be at an equal or higher level than its clients.

Client (NIM client): Any standalone machine or LPAR in a NIM environment other than the NIM master. Clients use resources that reside on the NIM master to perform various software maintenance, backup ...

Resource (NIM resources): A single file or a whole filesystem that is used to provide some sort of information to, or perform an operation on, a NIM client. Resources are allocated to NIM clients using NFS and can be allocated to multiple clients at the same time. Resources can be: mksysb, spot, lpp_source, machines...

Allocate/Allocation: This process is what allows your NIM client to access resources in NIM. The master uses
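To make the allocation idea concrete, here is a minimal sketch using standard NIM commands run on the master; the client and resource names (aix_client1, spot_7100, lpp_source_7100) are just placeholders:

# List all NIM objects defined on the master
lsnim

# Show the detailed definition of a client, including any currently allocated resources
lsnim -l aix_client1

# Allocate a SPOT and an lpp_source to the client; NIM NFS-exports them to that client
nim -o allocate -a spot=spot_7100 -a lpp_source=lpp_source_7100 aix_client1

# Deallocate all resources from the client when finished
nim -o deallocate -a subclass=all aix_client1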

AIX History

IBM had 2 discrete Power Architecture hardware lines, based on different operating systems:
    - OS/400, later i5/OS, and later still IBM i
    - AIX (on the same hardware it is possible to run Linux as well)

I. 1986-1990 (AS/400 - IBM RT): In 1986 AIX Version 1 was introduced for the IBM 6150 RT workstation, which was based on UNIX. In 1987, for the other product line, OS/400 (later i5/OS and IBM i), the AS/400 hardware platform was released.

II. 1990-1999 (RS/6000): Among other variants, IBM later produced AIX Version 3 (also known as AIX/6000) for its IBM POWER-based RS/6000 platform. The RS/6000 family replaced the IBM RT computer platform in February 1990 and was the first computer line to use IBM's POWER and PowerPC based microprocessors. Since 1990, AIX has served as the primary operating system for the RS/6000 series. AIX Version 4, introduced in 1994, added symmetric multiprocessing with the introduction of the first RS/6000 SMP servers.

GPFS Cluster

How I built a GPFS cluster

Scenario: I will build the initial cluster using two shared SAN disks connected to two LPARs of a p550 server, then add another node from a p550 LPAR with no SAN disk and create a few GPFS filesystems using internal disks. Later I will add a new disk to the 1st GPFS filesystem. If time permits I will add at least one more node to the cluster. So my final cluster will have one primary configuration server node and one secondary configuration server node.

You will notice that sometimes I ran commands from different servers. It really doesn't matter for GPFS, as you can run any command from any node. I had several ssh sessions open, so I used whichever window I found open.

Step 1. I will install the GPFS software and the latest fix packs on the 1st node, which will also be the primary configuration server. I copied all the GPFS software and downloaded the latest available fix packs to the /utility/SOFT/gpfs directory. Here is the list of all t
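As a rough sketch of what Step 1 and the initial cluster creation look like on the command line (the node names node1/node2 and the cluster name are placeholders, assuming GPFS 3.x-era commands):

# Install the GPFS filesets and fix packs copied to /utility/SOFT/gpfs
# (apply and commit, pull in requisites, expand filesystems, accept license)
installp -acgXYd /utility/SOFT/gpfs all

# Verify the installed GPFS level
lslpp -l "gpfs.*"

# Create the two-node cluster: node1 as primary and node2 as secondary
# configuration server, both as quorum nodes, using ssh/scp between nodes
mmcrcluster -N "node1:quorum-manager,node2:quorum-manager" \
            -p node1 -s node2 \
            -r /usr/bin/ssh -R /usr/bin/scp -C gpfs_cluster1

# Accept the server license on both nodes and check the cluster definition
mmchlicense server --accept -N node1,node2
mmlscluster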