Dec 25, 2009

BEST PRACTICE: "Choosing the Right Base Class for your MP Classes" by Dan Rogers


The first thing you learn when writing a management pack is that you need these things called "classes" to make your management pack work.  This is a confusing term, and it often throws new authors for a loop because of the conceptual similarity to object-oriented programming.
 
 
The similarities are pretty deep - and in fact the System Center team made an effort to make Operations Manager more "object oriented" in version 3 to try and get some of the gains that you get when designing a system in an object-oriented way.  These gains include "free behaviors" (simplifying development of management packs) and easier management for your customer (due to the user interface in the Operators Console being able to report on the health of things rather than just tabular reports of alerts and symptoms).  Classes, properties and inheritance are the artifacts that you need to learn about in order to get these gains.
 
One of the first choices you make when designing a class for your management pack is deciding which base class it should inherit from.  Base classes come from libraries of predefined behaviors that are supplied as part of the Operations Manager infrastructure.  There are dozens and dozens of possible base classes, and you can even use your own classes as new base classes to derive other classes from - it gets confusing really quickly, however, because the "class picker" in the authoring tools simply presents a list of all possible base classes.  You essentially have to be an expert on the Ops Manager class library in order to be sure of what you get in the end.
 
For a management pack designer, having to be an expert to get started is a pretty daunting problem. 
 
In this article we present a simple list of "do's" for choosing the right parent class.  In most cases the base class types listed here are going to be the ones you always want to use.
 
Class Name
Library Reference
When to use
Benefits
Microsoft.Windows.Library.ComputerRole
System
This should be your first choice.  If your product is sufficiently important to the customer that they would consider it a stand-alone product role (e.g. a web server, an ERP system, an infrastructure service that other computers depend on), then this is a good choice.
 
Side Effects:  Always gets a "Hosted" relationship to the computer the product is installed on.
 
Always adds a column into the Computers View
Automatically rolls up health to the computer (real or virtual) that hosts the application.
 
If you want your application to show up in the computer view as a column, and have automatic health roll up at the computer level, always choose this class first as the parent of your classes.
Microsoft.Windows.Library.LocalApplication
System
Choose this when the application or product runs on a Windows computer and can run alongside other applications on the same computer.  In the case of a minor application that the customer typically would install on multiple computers but not dedicate computers or virtual machines to, this is a good choice.
 
Side Effects:  Does not automatically roll up to computer health.  Has a "hosted" relationship to the computer.
This is another easy base class choice for most management packs for non-workload related products.
 
Inherits from ComputerRole, so the health of the application will show up when the customer drills into the computer-role within a computer health view.
Microsoft.Windows.Library.ApplicationComponent
System
Choose this when the element whose health you want to show your customers is part of a larger system that cannot be deployed and operated independently of other, more standalone parts of your product.  An example is the SQL Agent service, which doesn't make sense to deploy without the SQL Server product.
 
Side Effects: Can roll up to either ApplicationComponent or ComputerRole easily. 
Use this when you want to declare a hosted-by relationship to other classes in your management pack (or classes that are public in other management packs), and want automatic health roll up to the higher level classes in your class hierarchy.
System.Service
or
System.ComputerRole
System
Choose one of these when you want to represent a logical element of your application with fine grain control over hosting relationships and health roll up.
 
Side Effects
System.Service allows your class to be used from within the distributed app designer at customer locations.
System.ComputerRole has no known side effects.
If the three choices above don't make sense, consider this one next.
System.Perspective
System
This is used to drive a roll up across multiple distributed elements of the application and represents an element of service that can be impacted by a number of classes that are physically deployed on separate computers.
 
Side Effect:  Always hosted on the Root Management Server.
Use this class when you want to project a roll up of health states for an element of service.  A good example might be "availability" as a rolled up health state for a load balanced web site.  If any of the instances represented by the perspective element are up, you can show that your service is up, even if some of the elements are not healthy. 
Common base class derivations for application/product elements
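As a concrete illustration, a class declaration following the guidance above might look like the fragment below.  This is a hedged sketch, not taken from a shipping MP: the "Contoso.MyApp" ID is hypothetical, and it assumes Microsoft.Windows.Library is referenced in the manifest under the alias "Windows".

```xml
<!-- Hypothetical class deriving from LocalApplication: hosted on the
     Windows computer, but without its own column in the Computers view -->
<ClassType ID="Contoso.MyApp" Accessibility="Public" Abstract="false"
           Base="Windows!Microsoft.Windows.LocalApplication"
           Hosted="true" Singleton="false">
  <Property ID="Version" Type="string" Key="false" />
</ClassType>
```

Swapping the Base attribute to Windows!Microsoft.Windows.ComputerRole is all it takes to move the same class up to the "stand-alone product role" pattern from the first row of the table.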
 
Other types of base classes exist for describing things like groups of instances and groups of computers.  Typically groups are used to give your customers the ability to constrain monitoring to a set of like elements, rather than letting domain-wide discovery just start monitoring everything your management pack covers, no matter what importance, relevance or status it has.  Think of the case where you want to add monitoring to business-critical web sites - but don't care to have the same level of operator attention paid to non-essential services.
 
A good practice is to always think about adding a group to your MP. This lets customers decide on sub-sets of computers or discovered classes.  Customers can then use the group to limit the scope of the monitoring.  If this is a goal, make sure you ship the appropriate discoveries with public accessibility and with the Enabled property set to false.  Doing this lets the customer easily create an override that enables the discovery only in the context of the group.  If you ship the discovery enabled, importing the MP will cause the discovery to run, and this limits the effectiveness of groups.
 
 
Another way to do this is by using script based discovery and having the discovery manage the group content.  System.Service makes a good choice for this discovery populated type of group.
 
Having a computer group (or other group) that targets your classes then lets the customer decide which computers (or web sites, etc.) matter by adding those to the group that you provide, and then creating an override on the Enabled property of the discovery, scoped to that group.
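Sketched as MP XML, the "ship it disabled" pattern looks roughly like this.  The IDs and the registry-based data source are hypothetical; the point to notice is Enabled="false" on the discovery:

```xml
<!-- Hypothetical discovery shipped disabled; the customer enables it via an
     override scoped to a group they control -->
<Discovery ID="Contoso.MyApp.Discovery" Enabled="false"
           Target="Windows!Microsoft.Windows.Server.Computer"
           ConfirmDelivery="false" Remotable="true" Priority="Normal">
  <Category>Discovery</Category>
  <DiscoveryTypes>
    <DiscoveryClass TypeID="Contoso.MyApp" />
  </DiscoveryTypes>
  <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.FilteredRegistryDiscoveryProvider">
    <!-- registry probe configuration elided -->
  </DataSource>
</Discovery>
```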

Making groups of logical disks – an example from simple to advanced


I have been seeing this question come up a lot lately – as customers try and create groups of their disks – in order to create overrides for “certain” disks.  So – I am creating this post to give some real world examples.

Well – I will start simply.  Say we want to create a group of all logical disks with the drive letters C: and D:.
I would start by creating a new group – and adding the “Windows Server 2003 Logical Disk” class.  Now – I could just use the parent class of “Logical Disk” instead of the OS-specific class if I wanted.  The only issue with that is that most monitors targeting a disk are OS-specific – and duplicated three times.  So it is best to create specific groups for these – but it's totally not required.

Ok – so in the Dynamic Members query builder – I click add, and pick a property.  Since I know “Device Name” contains the drive letter – this will do nicely.  I select device name “Equals” “C:”. 
image

Now – I want to also include D:.  There are many ways to do this – and I will go through them.  First – I could simply insert a new line for Windows Server 2003 Logical Disk – replicating the line I have – and adding one for D:

image

Only one problem – this is an “AND” grouping – I really need this to be an “OR” grouping to include both C: and D: drives.  You can switch this grouping in the UI: just right-click the word “AND” and change it to an OR grouping:

image

Voila!

image

This formula now looks like: 
( Object is Windows Server 2003 Logical Disk AND ( Device Name Equals C: ) OR ( Device Name Equals D: ) )
Save your group – then right click it – and choose “View Group Members”.  This will ensure we are cooking with gas.  It should contain all your Windows 2003 based C: and D: volumes.
image


So far – so good.
Now – what if I ONLY want C: and D: disks that are HOSTED by specific Windows Computers?  I can do that too!  Let's say I want a group of all the C: and D: logical disks on servers whose names begin with “SR______”
If you look at the bottom of the list of properties for Logical Disks – you will see (Host=Windows Computer).  From here – we can pick any attribute of the Windows Computer class to add to our expression – to limit the logical disks in our group to very specific computers.

image


Go back to the properties of your group, edit the Dynamic Members, and you can construct something like this:

image

Which translates to the following formula:
( Object is Windows Server 2003 Logical Disk AND ( Windows Computer.NetBIOS Computer Name Matches wildcard sr* ) AND ( ( Device Name Equals C: ) OR ( Device Name Equals D: ) ) )
Now – I will be honest – getting all the “ANDs” and “ORs” in the right place using the UI is a big pain.  It is very easy to screw up.  I like to simplify this to the fewest lines possible – using RegEx.
Using regular expressions – we can use modifiers to create very advanced expressions.  My favorites are ^, which means the beginning of a line or word, and |, the “pipe” symbol, which means “or”.

So a simple way to accomplish the same example above – without all the complexity – is this:

image

WAY simpler!
However – you might notice – this doesn't work right.  This is because RegEx is case sensitive.  If the server NetBIOS name is discovered in all CAPS, this expression won't match.  I talk a little about this issue in this post:  http://blogs.technet.com/kevinholman/archive/2009/04/21/quick-tip-using-regular-expressions-in-a-dynamic-group.aspx
So – based on that post's example – there is a simple way to make a RegEx case insensitive:  (?i:blah)
Using that as an example – we can now make very advanced groupings, quite easily:

image

(?i: makes it case insensitive.  ^ signifies the beginning of the word/line match.   Here is the formula now:

( Object is Windows Server 2003 Logical Disk AND ( Device Name Matches regular expression (?i:^C|^D) ) AND ( Windows Computer.NetBIOS Computer Name Matches regular expression (?i:^sr) ) )

Check it out:

image

Victory!
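The two RegEx modifiers are easy to try outside of SCOM.  SCOM evaluates .NET regular expressions, but Python's `re` module is close enough to demonstrate what (?i:...) and ^ do with the exact patterns from the formula above:

```python
import re

# The same patterns SCOM evaluates in the group formula:
# (?i:...) is a case-insensitive group, ^ anchors at the start of the string.
disk_pattern = re.compile(r"(?i:^C|^D)")
computer_pattern = re.compile(r"(?i:^sr)")

devices = ["C:", "c:", "D:", "E:"]
matched_disks = [d for d in devices if disk_pattern.search(d)]
print(matched_disks)  # ['C:', 'c:', 'D:']  -- E: is excluded

computers = ["SR01", "sr02", "DB01"]
matched_computers = [c for c in computers if computer_pattern.search(c)]
print(matched_computers)  # ['SR01', 'sr02'] -- case no longer matters
```

One caveat: .NET and Python regex dialects differ in some advanced constructs, so always verify the final expression with “View Group Members”.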

What if I wanted all logical disks that were NOT hosted by a Virtual Machine?  Easy!

image

( Object is Logical Disk AND ( Windows Computer.Virtual Machine Equals False ) AND True )
This reveals a group of ALL logical disks hosted by a Windows Computer with the attribute of Virtual Machine = False:

image

As you can see – using the hosting relationship of the disk to the Windows Computer object, there is much more you can do with groups.

Writing monitors to target Logical or Physical Disks (SCOM)


This is something a LOT of people make mistakes on – so I wanted to write a post on how to do this properly, using a very common target as an example.
When we write a monitor for something like “Processor\% Processor Time\_Total” and target “Windows Server Operating System”…. everything is very simple.  “Windows Server Operating System” is a single instance target…. meaning there is only ONE “Operating System” instance per agent.  “Processor\% Processor Time\_Total” is also a single instance counter…. using ONLY the “_Total” instance for our measurement.  Therefore – your performance unit monitors for this example work just like you’d think.

However – Logical Disk is very different.  On a given agent – there will often be MULTIPLE instances of “Logical Disk” per agent, such as C:, D:, E:, F:, etc…   We must write our monitors to take this into account. 
For this reason – we cannot monitor a Logical Disk perf counter, and use “Windows Server Operating System” as the target.  The only way this would work, is if we SPECIFICALLY chose the instance in perfmon.  I will explain:

Bad example #1:
I want to monitor for the perf counter Logical Disk\% Free Space\ so that I can get an alert when any logical disk is below 50% in free space.
I create a new monitor > unit monitor > Windows Performance Counters > Static Thresholds > Single Threshold > Simple Threshold. 
image
I target a generic class, such as “Windows Server Operating System”.
I choose the perf counter I want – and select all instances:
image
And save my monitor.
The problem with this workflow is that we targeted a multi-instance perf counter at a single-instance target.  This workflow will load on all Windows Server Operating Systems, and parse through all discovered instances.  If an agent only has ONE instance of “Logical Disk” (C:), then this monitor will work perfectly: if the C: drive does not have enough free space – no issues.  HOWEVER… if an agent has MULTIPLE instances of logical disks (C:, D:, E:) AND those disks have different threshold results, the monitor will “flip-flop” as it examines each instance of the counter.  For example, if C: is running out of space but D: is not, the workflow will examine C:, turn red, and generate an alert, then immediately examine D:, turn back to green, and close the alert.
This is SERIOUS.  This will FLOOD your environment with statechanges, and alerts, every minute, from EVERY Operating System.
A quick review of Health Explorer will show what is happening:
This monitor went “unhealthy” and issued an alert at 10:20:58AM for the C: instance:
image
Then went “healthy” in the same SECOND from the _Total Instance:
image
Then flipped back to unhealthy, at the same time – for the D: instance.
image

I think you can see how bad this is.  I find this condition all the time, even in “mature” SCOM implementations… it just happens when someone creates a simple perf threshold monitor but doesn't understand the class model, or multi-instance perf counters.  In an environment with only 500 monitored agents – I can generate over 100,000 state changes – and 50,000 alerts, in an HOUR!!!!

Ok – lesson learned – DON'T target a single-instance class with a multi-instance perf counter.  So – what should I have used?  Well, in this case – I should use something like “Windows 2008 Logical Disk”.  But we can still screw that up!  :-)

Bad example #2:
I want to monitor for the perf counter Logical Disk\% Free Space\ so that I can get an alert when any logical disk is below 20% in free space.
I create a new monitor > Unit monitor > Windows Performance Counters > Static Thresholds > Single Threshold > Simple Threshold.
image
I have learned from my mistake in Bad Example #1, so I target a more specific class, such as “Windows Server 2008 Logical Disk”.
I choose the perf counter I want – and select all instances:
image
And save my monitor.
Ack!  The SAME problem!  Why????
The problem is – now, instead of each Operating System instance loading this monitor and then parsing and measuring each counter instance, EACH INSTANCE of logical disk is doing the SAME THING.  This is actually WORSE than before…. because the number of monitors loaded is MUCH higher, and it will flood me with even more state changes and alerts than before.
Now if I look at Health Explorer – I will likely see MULTIPLE disks have gone red, and are “flip-flopping” and throwing alerts like never before.
image

When you dig into Health Explorer – you will see that they are being turned unhealthy – and it isn't even their own drive letter!  I will examine the F: drive monitor:
I can see it was turned unhealthy because the free-space threshold was hit on the D: drive!
image
and then flipped back to healthy due to the available space on the C: instance:
image
This is very, very bad.  So – what are we supposed to do???

We need to target the specific class (Windows 2008 Logical Disk) AND then use a Wildcard parameter, to match the INSTANCE name of the perf counter to the INSTANCE name of the “Logical Disk” object.  Make sense?  Such as – match up the “C:” perf counter instance – to the “C:” Device ID of the Logical Disk discovered in SCOM.  This is actually easier than it sounds:

Good example:

I want to monitor for the perf counter Logical Disk\% Free Space\ so that I can get an alert when any logical disk is below 20% in free space.
I create a new monitor > Unit monitor > Windows Performance Counters > Static Thresholds > Single Threshold > Simple Threshold.
image
I have learned from my mistake in Bad Example #1, so I target a more specific class, such as “Windows Server 2008 Logical Disk”.
I choose the perf counter I want – and INSTEAD of selecting all instances, I learn from my mistake in Bad Example #2.  This time I will UNCHECK the “All Instances” box, and use the “fly-out” on the right of the “Instance:” box:
image

This fly-out will present wildcard options, which are discovered properties of the Windows Server 2008 Logical Disk class.  You can see all of these if you view that class in Discovered Inventory.  What we need to do now is use Discovered Inventory to find a property that matches the perfmon instance name.  In perfmon – we see the instance names are “C:” or “D:”
image
In Discovered Inventory – looking at the Windows Server 2008 Logical Disk, I can see that “Device ID” is probably a good property to match on:
image

So – I choose “Device ID” from the fly-out, which inserts a parameter wildcard, so that the monitor on EACH DISK will ONLY examine the perf data from the INSTANCE in perfmon that matches that disk's drive letter.
image

The wildcard parameter is actually something like this:
$Target/Property[Type="MicrosoftWindowsLibrary6172210!Microsoft.Windows.LogicalDevice"]/DeviceID$
This is simply a reference to the MP that defined the “Device ID” property on the class.
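For reference, in the resulting monitor configuration the relevant part looks roughly like the fragment below.  This is a hedged sketch reconstructed from the wizard's output, not an exported MP; the "Windows!" alias and surrounding element names should be verified against your own MP:

```xml
<!-- Sketch of the perf unit monitor configuration: each discovered disk
     instance only watches its own perfmon instance -->
<Configuration>
  <CounterName>% Free Space</CounterName>
  <ObjectName>LogicalDisk</ObjectName>
  <InstanceName>$Target/Property[Type="Windows!Microsoft.Windows.LogicalDevice"]/DeviceID$</InstanceName>
  <AllInstances>false</AllInstances>
  <Frequency>300</Frequency>
  <Threshold>20</Threshold>
</Configuration>
```

At runtime the $Target/Property...$ variable resolves per-instance, so the monitor hosted on C: reads only the "C:" counter instance, and the one on D: reads only "D:".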

Now – no more flip-flopping, no more statechangeevent floods, no more alert storms opening and closing several times per second.


You can use this same process for any multi-instance perf object.  I have a (slightly less verbose) example using SQL server HERE.

To determine if you have already messed up…. you can look at “Top 20 Alerts in an Operational Database, by Alert Count” and “Historical list of state changes by Monitor, by Day”, which are available on my SQL Query List.  These will reveal excessive alerts and monitor flip-flop, which should be investigated.
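If you don't have that query list handy, a query along these lines surfaces the noisiest monitors.  This is written from memory against the standard OpsMgr 2007 OperationsManager database schema (StateChangeEvent, State, MonitorView), so treat it as a sketch and verify the table and view names before running it:

```sql
-- Top 20 monitors by state change count (run against the OperationsManager DB)
SELECT DISTINCT TOP 20
    COUNT(sce.StateId) AS NumStateChanges,
    m.DisplayName      AS MonitorDisplayName,
    m.Name             AS MonitorIdName
FROM StateChangeEvent sce WITH (NOLOCK)
JOIN State s       WITH (NOLOCK) ON sce.StateId = s.StateId
JOIN MonitorView m WITH (NOLOCK) ON s.MonitorId = m.Id
WHERE m.IsUnitMonitor = 1
GROUP BY m.DisplayName, m.Name
ORDER BY NumStateChanges DESC
```

A flip-flopping perf monitor like the bad examples above will sit at the very top of this list with a count orders of magnitude above everything else.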

Removing an old product connector (SCOM)


In OpsMgr – there is no simple way to remove a product connector once installed.  There is no delete function in the UI:

 What if you have an old connector that you no longer use?  Or, if you quit using an Engyro connector, and started using a MS branded R2 connector?
It may be necessary to forcibly remove a connector from OpsMgr 2007.

*** Note – this process is not officially supported.  There is no officially supported way to remove an old connector.  It is only being demonstrated here for example purposes.  I have used these steps with customers, and thus far no ill effects were discovered.  These steps were gleaned from a MS newsgroup posting.

Open SQL Server Management Studio from the Start menu and run the following queries on the Operations Manager database.

Step 1: Find the ConnectorID:
select DisplayName,IsInitialized,ConnectorID from Connector,BaseManagedEntity
where Connector.BaseManagedEntityID=BaseManagedEntity.BaseManagedEntityID
This will return 3 Columns: The Display Name, Initialized Flag, and ConnectorID.
Find the ConnectorID of the connector you want to remove.  Copy it to Notepad for safekeeping.

Step 2: Un-Initialize the Connector
If the Connector field "IsInitialized" is 1, then you will need to uninitialize the connector before deleting it.
If IsInitialized is 0, skip to step 3.
Use the p_ConnectorUpdate SP to uninitialize the connector:
The first parameter is YOUR ConnectorID, the 2nd is the bookmark (this should be NULL), and the 3rd is the initialized state (0 in our case).
EXEC p_ConnectorUpdate '5fabdf4c-a1f8-43bf-ac18-e46e20bd470b',NULL,0
Make sure this worked by re-running the initial query, and ensure the connector's initialized state is now 0.

Step 3: Delete the Connector
Use the p_ConnectorDelete SP to delete the connector.
The first parameter is YOUR ConnectorID, the 2nd Parameter is the comments you want added to the alert history when the connector is deleted, and the 3rd parameter is the Modified By field you want added to the alert history. (I use NULL for the optional fields to keep it simple.)
EXEC p_ConnectorDelete '5fabdf4c-a1f8-43bf-ac18-e46e20bd470b',NULL,NULL

This procedure can take a long time if there is a lot of data still associated with the connector. Be Patient.


Now – I need to mention – there is also a very interesting community tool – which was written to manage product connectors.  I have not used it – but it is worth a look:
http://www.systemcentercentral.com/Downloads/DownloadsDetails/tabid/144/IndexID/12581/Default.aspx

ConfigMgr Client Troubleshooter


ConfigMgr Client Troubleshooter is another utility that shows the state of the SCCM client. Besides service status information and log viewing, you can get information about the operating system, install the client, refresh policies, rerun assigned advertisements, and so on.
You can download the utility and find full details about it on the developer's blog.

Merry Christmas!!!


Dec 21, 2009

Changing Domains and/or Domain Names with SMS 2003 and SCCM 2007


A question that seems to come up a lot around here is from people who already have existing SMS 2003 servers and they want to either change the domain name or move the server to a new domain.  Before we can truly address that question though, we must understand the different security modes available in each product because the security mode is what largely determines our answer.

In SMS 2003 there are two security modes: Standard Security mode and Advanced Security mode.  So what's the difference?  Standard Security mode uses user accounts to run services, configure computers and connect between computers, whereas Advanced Security mode relies on Active Directory.
In System Center Configuration Manager (SCCM 2007) we also have two modes, but they work a little differently.  The first is Mixed Mode, which is analogous to Advanced Security mode in SMS 2003, and the second is Native Mode, which takes security even higher by integrating with a public key infrastructure (PKI) to help protect communication by using certificates.
Now unless you already know the answer to our original question about changing domains you're probably wondering what all this has to do with anything.  The answer is that you can change domains in some modes but not others.  If you're running SMS 2003 in Standard Security mode then yes, you can change domains.  If you're running SMS 2003 in Advanced Security mode then no, you cannot change domains.  So where does SCCM 2007 fit in all of this?  Well, considering that SCCM 2007 security starts with Mixed Mode (which is basically SMS 2003 Advanced Security mode), that tells us that changing domains in SCCM 2007 is not supported at all.
Here's a chart that should help make this a little more clear:
image
I can't imagine too many folks running SMS 2003 in Standard Security mode these days so discarding that, what if you have to change the domain name?  Unfortunately, if you find yourself in this scenario your only real recourse is a removal and reinstallation of the site.  Not ideal, I know, but that's the reality of the situation so you'll want to take careful consideration of this when initially planning your hierarchy.

How to link multiple Gateway Servers together?

Overview:

In Operations Manager, the Gateway server role is primarily used for monitoring servers outside the Root Management Server's trusted domain boundary.  Another popular use of the Gateway role is performance improvement, by placing gateways in sites with poor network connectivity.  Sometimes it is necessary to “chain” multiple gateways together to monitor across multiple untrusted boundaries.
For Example, say you have a scenario that looks like the following:
image
Here we have a management group installed in the “My Company” network.  The admin has the requirement to monitor the machines in the “DMZ” network.  There is no direct connection between the “My Company” network and the “DMZ” network without first going through the “ExtraNet” network.


How to setup chained Gateways?

To minimize the chance of configuring things wrong, install one gateway at a time, starting with the one reporting directly to an existing MS or the RMS, and moving “out.”  Verify that newly installed gateways are properly communicating, downloading MPs, etc. before attempting to install the next one in the chain.
The actual install steps are identical for any gateway, whether reporting to an MS/RMS or chained to another gateway.
1. On the RMS, run the GatewayApprovalTool, passing:
a. /ManagementServerName: (the FQDN of the server the new gateway will report to)
b. /GatewayName: (the FQDN of the new gateway)
2. Install the gateway bits on the new machine
3. If needed, configure certificates to establish trust between the new gateway and the server it reports to

Here is the tricky part: Configuring certificates between two gateways is no different than configuring certificates between an agent and a gateway, or a gateway and a MS or RMS.  The same certificate settings are required, the same tools are used to request, install, and import the certs.  A healthservice can only load and use a single auth certificate, so in the chained scenario the same certificate will be used by the gateway to authenticate to its parent and to any children.  The parent and child(ren) must both trust the Certificate Authority which issued the gateway’s cert.

Supported Configurations:

Simple:

image
Here the RMS loads a cert from CA1 and is configured to trust CA1 as its root certificate authority.  GW1 loads a cert issued from CA1 and trusts CA1.  GW2 loads a cert issued from CA1 and trusts CA1.

Complex:

image
Here the RMS loads a cert from CA1 and is configured to trust CA1 as its root certificate authority.  GW1 loads a cert issued from CA1 and trusts CA1 and CA2.  GW2 loads a cert issued from CA2 and trusts CA1.

Unsupported Configuration:

image
Here the RMS loads a cert from CA1 and is configured to trust CA1 as its root certificate authority.  GW1 loads a cert issued from CA1 and CA2 and trusts CA1 and CA2.  GW2 loads a cert issued from CA2 and trusts CA2.  This is not possible because GW1 cannot load more than one certificate in its health service communication channel.

FAQs:

Q1. In the supported complex configuration, doesn’t GW2 also need to trust CA2?
A1.  No: GW1 presents a cert from CA1, and this is the only cert GW2 needs to trust.  GW2 never needs to verify the trust of a CA2 certificate (the HealthService only checks settings of the cert it loads, not that it comes from a trusted CA), so it doesn't NEED to have that CA trust cert.  A machine only needs to trust the incoming certificate from its parent or child; it does not need to trust the one it has loaded.
Hope this helps!  I will update FAQs as I get more questions.
Technical content was provided and tested by Lincoln Atkinson.  Thanks Lincoln!!
Rob Kuehfus | System Center Operations Manager | Setup and Deployment Program Manager
This is supplied "as-is" with no support. In addition, my thoughts and opinions often change, and as a weblog is intended to provide a semi-permanent point-in-time snapshot, you should not consider out-of-date posts to reflect my current thoughts and opinions.

Cross Platform PowerShell Scripts Released


We are happy to release several cross-platform PowerShell scripts to help automate the discovery of UNIX/Linux servers, installation and upgrade of the cross-platform agents for Operations Manager 2007 R2, signing of certificates, and changing the Management Server managing a UNIX/Linux server.
Four PowerShell scripts are currently available:
  • ChangeUnixIsManagedBy.ps1
  • DiscoverUnixAgent.ps1
  • InstallUnixAgent.ps1
  • UpgradeUnixAgent.ps1 
Each of these scripts is covered in this article, and help for each of these scripts is available by running the script without any parameters.


These four scripts leverage several helper scripts, which do not perform any actions on their own (and return no output if run on their own):
  • ChangeUnixIsManagedByImpl.ps1
  • DiscoverUnixAgentImpl.ps1
  • InstallUnixAgentImpl.ps1
  • scx.ps1
  • UpgradeUnixAgentImpl.ps1
The .zip file attached contains the files noted above.
Note: All of these scripts require PowerShell to be installed and are run from a Windows computer (not from a UNIX/Linux client computer).

ChangeUnixIsManagedBy.ps1
The ChangeUnixIsManagedBy script is used to change the current management server monitoring a UNIX/Linux server (or group of UNIX/Linux servers) to a new server, as specified in the command line parameters.
The script accepts a list of Unix/Linux hosts on the input pipe.  These hosts are represented either as strings with fully qualified domain names (FQDN) or as objects with a "ComputerName" string property with the FQDN.
The output of the script is a list of objects with a "ComputerName" string property with the FQDN of the Unix/Linux host and a "Status" property with the status of the operation for the current host.
The following parameters can be used with the ChangeUnixIsManagedBy script:
  • RootManagementServer: Name of the OpsMgr root management server to use, or an empty string to use the current computer (default)
  • ManagementServer: Management Server to assign (required)
  • Target: Additional computer to change the management server for (done before any hosts are piped into the script)

Usage:
ChangeUnixIsManagedBy.ps1 -RootManagementServer:<RMS name> -ManagementServer:<MS name>
-Target:<computer name>

Examples:
You have a Linux computer named SLES10-1.contoso.com and you want to change its Management Server to ContosoMS2.contoso.com, so you would use the following command:
ChangeUnixIsManagedBy.ps1 -ManagementServer:ContosoMS2.contoso.com
-Target:SLES10-1.contoso.com
Note: The script can also be executed without the parameter names, but only if all the parameters are provided. For example:
ChangeUnixIsManagedBy.ps1 ContosoRMS.contoso.com ContosoMS2.contoso.com
SLES10-1.contoso.com
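Since the script accepts hosts on the input pipe, you can also batch the change for many servers at once.  A sketch (the UnixHosts.txt file name is hypothetical, and the script is assumed to be in the current directory):

```powershell
# Hypothetical UnixHosts.txt: one FQDN per line
# Each host on the pipe is reassigned to the new Management Server,
# and the script emits a ComputerName/Status object per host
Get-Content .\UnixHosts.txt |
    .\ChangeUnixIsManagedBy.ps1 -ManagementServer:ContosoMS2.contoso.com
```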

InstallUnixAgent.ps1
The InstallUnixAgent script is used to deploy and install the Operations Manager Cross Platform Agent on the UNIX/Linux server(s). Running this script will first deploy the agent to the specified UNIX/Linux server and then install it. The script accepts a list of UNIX/Linux hosts on the input pipe. The hosts are represented either as strings with fully qualified domain names (FQDN) or as objects with a "ComputerName" string property with the FQDN.
The output of the script is a list of objects with a "ComputerName" string property added/changed to contain the FQDN of the Unix/Linux host and a "Status" property added/changed to contain the status of the operation for the current host. All status strings except 'OK' are an error message.
The following parameters can be used with the InstallUnixAgent script:
  • Port: Port to connect to at the remote host (default is 22)
  • RootManagementServer: Name of the OpsMgr root management server to use, or an empty string to use the current computer (default)
  • Username: User name to use for connecting to the remote host (required)
  • Password: Password for the specified username (required)
  • PackageName: Name of the package to install (required; name of the package file)
  • PackagePath: Full path (on the Operations Manager server) to the package being installed (required; name of the folder with the package file)
  • Distro: Distribution or OS name (required; one of AIX, HPUX, Solaris, RHEL or SLES)
  • Version: OS version of the remote host (required; one of 11iv2, 11iv3, 4, 5, 5.3, 6.1, 8, 9, 10, 11)
  • Architecture: OS architecture of the remote host (required; one of Powerpc, IA64, PARISC, SPARC, x86 or x64)
  • Target: Additional remote computer targeted for install (done before any hosts are piped into the script)

Usage:
InstallUnixAgent.ps1 -Port: -RootManagementServer:
-Username: -Password: -PackageName:
-PackagePath: -Distro: -Version:
-Architecture: -Target:
Example:
To deploy and install the x86 version of the SUSE Linux Enterprise Server 10 (SLES) Agent to the computer named SLES10-1.contoso.com, the following would be used:
InstallUnixAgent.ps1 -Username:root -Password:password -PackageName:scx-1.0.4-248.sles.10.x86.rpm -PackagePath:"C:\Program Files\System Center Operations Manager 2007\AgentManagement\UnixAgents" -Distro:SLES -Version:10 -Architecture:x86
-Target:SLES10-1.contoso.com
Note: The script can also be executed without the parameter names, but all the parameters must be provided.
UpgradeUnixAgent.ps1
The UpgradeUnixAgent script is used to upgrade an existing Operations Manager Cross Platform Agent on a UNIX/Linux server. This is done by first deploying the updated agent to the remote UNIX/Linux server and then installing it (upgrading).
Usage:
UpgradeUnixAgent.ps1 -Port: -RootManagementServer:
-Username: -Password: -PackageName:
-PackagePath: -Distro: -Version:
-Architecture: -Target:
Example:
To upgrade an existing SLES agent to a new version (e.g., version 1.0.4-252), you would use the following command:
UpgradeUnixAgent.ps1 -Username:root -Password:password -PackageName:scx-1.0.4-252.sles.10.x86.rpm -PackagePath:"C:\Program Files\System Center Operations Manager 2007\AgentManagement\UnixAgents" -Distro:SLES -Version:10 -Architecture:x86
-Target:SLES10-1.contoso.com
DiscoverUnixAgent.ps1
The DiscoverUnixAgent script is used to sign the certificate used for communication and to discover an instance of the UNIX/Linux server into Operations Manager.
The script accepts a list of UNIX/Linux hosts on the input pipe. The hosts are represented either as strings with fully qualified domain names (FQDN) or as objects with a "ComputerName" string property with the FQDN.
The output of the script is a list of objects with a "ComputerName" string property added/changed to contain the FQDN of the Unix/Linux host and a "Status" property added/changed to contain the status of the operation for the current host. All status strings except 'OK' are an error message.
The following parameters can be used with the DiscoverUnixAgent script:
- Port: Port to connect to on the remote host (default is 22)
- Server: Name of the OpsMgr server to use, or an empty string to use the current computer (default)
- RootManagementServer: Name of the OpsMgr root management server to use, or an empty string to use the current computer (default)
- Username: User name to use to connect to the remote host (required)
- Password: Password to use to connect to the remote host (required)
- Distro: Distribution or OS name (required; see below for valid combinations of operating systems, versions, and architectures)
- Version: OS version of the remote host (required; see below for valid combinations)
- Architecture: OS architecture of the remote host (required; see below for valid combinations)
- Target: Additional remote computer to discover (processed before any hosts piped into the script)

Valid combinations of operating systems, versions, and architectures:
- AIX: 5.3 | 6.1 on Powerpc
- HPUX: 11iv2 | 11iv3 on PARISC
- Solaris: 8 | 9 | 10 on SPARC; 10 on x86 (kernel version > 120012-14)
- RHEL: 4 | 5 on x86 | x64
- SLES: 9 on x86 | x64; 10 | 11 on x86 | x64
Usage:
DiscoverUnixAgent.ps1 -Port: -Server:
-RootManagementServer: -Username: -Password:
-Distro: -Version: -Architecture: -Target:
Example:
To sign the certificate and discover the x86 version of SUSE Linux Enterprise Server 10 (SLES) on the server named SLES10-1.contoso.com, the following would be used:
DiscoverUnixAgent.ps1 -Username:root -Password:password -Distro:SLES -Version:10 -Architecture:x86
-Target:SLES10-1.contoso.com
As before, the script can also be executed without the parameter names, however, in these instances all the parameters must be provided.

Chaining the PowerShell Scripts
The Cross Platform PowerShell scripts are designed so that they can be chained together. For example, you may want to deploy, install, sign, and discover a single UNIX/Linux server (or a group of UNIX/Linux servers) by using a single command. This is done the same way other PowerShell scripts are chained together - by piping the output of one command into the input of the next.
However, in these instances, we want to make sure that the output from one command is only piped into the next if the action being performed was successful. For example, if the initial agent installation fails, we don’t want to attempt to discover the server into Operations Manager. This can be done in a couple of ways.
Examples:
This command passes the output from the first script into the second script, but only if the first one was successful:
Type File_of_hosts | InstallUnixAgent.ps1 | Where { $_.Status -eq "OK"} | DiscoverUnixAgent.ps1
This command passes the output from the first script into the second script, but only if the first one was successful. Otherwise, it will write out an error status:
Type File_of_hosts | InstallUnixAgent.ps1  | foreach { if ($_.Status -eq "OK")
{ Write-Output $_ } else { Write-Error $_.Status } } | DiscoverUnixAgent.ps1
This example adds another piped command to return a list of the UNIX/Linux Servers against which the scripts were run and the outcome of each:
Type File_of_hosts | InstallUnixAgent.ps1  | foreach { if ($_.Status -eq "OK")
{ Write-Output $_ } else { Write-Error $_.Status } } | DiscoverUnixAgent.ps1 | Foreach { Write-Host $_.ComputerName $_.Status }
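The Status test can also be inverted to report only the servers where a step failed. A sketch using the same assumed File_of_hosts file:
Type File_of_hosts | InstallUnixAgent.ps1 | Where { $_.Status -ne "OK" } | Foreach { Write-Host $_.ComputerName $_.Status }
Because every status string other than 'OK' is an error message, this prints each failed host alongside the reason it failed.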