Jan 12, 2011

SCCM Tips and Tricks

Important terms used in the software update point process:
Deployment: An object that is used to deploy software updates to clients in the target collection. Deployment objects are replicated to child sites, where they are read-only.
Deployment package: An object that hosts software update source files. Deployment packages are replicated to child sites, where they are read-only.
Deployment template: A template that stores many of the deployment properties that might not change from deployment to deployment, used to save time and ensure consistency when you create deployments.
Network Access Protection (NAP): A policy enforcement platform that allows you to better protect network assets by enforcing compliance with system health requirements. Configuration Manager 2007 NAP lets you include software updates in your health requirements.
Search folder: A folder that provides an easy way to retrieve a set of software updates that meet the defined search criteria.
Software update: Composed of two main parts: the metadata and the software update file. The metadata is the information about each software update and is stored in the site server database. The software update file is what client computers download and run to install the software update.
Software update file: The file that the client computer downloads, such as an executable (.exe) or Windows Installer (.msi) file, and then installs to update a component or application.
Software update metadata: Data that provides information about a software update, such as name, description, products that the update supports, update classification, article ID, download URL, applicability rules, and so on.
Update list: A fixed set of software updates that can be used for delegated administration and for creating software update deployments. There are also several reports that provide information about update lists.

What Sysprep does:
The computer should be in a workgroup because of its unique SID and other unique IDs. If you capture a domain-joined computer (Sysprep will try to move it to a workgroup) and deploy it, the destination computers would otherwise all carry duplicate identifiers. Sysprep assigns a unique security ID (SID) to each destination computer the first time the computer is restarted.
Sysprep does not only remove the SID; it also provides the following functions:
  • Removes the computer name; whereas a unique SID might not be required in some environments, unique computer names are certainly essential
  • Removes the computer from the Windows domain; this is necessary because the computer has to be added to Active Directory with its new name
  • Uninstalls plug-and-play device drivers, which reduces the risk of hardware compatibility problems; required drivers will be installed automatically on the target machines
  • Can remove event logs (reseal parameter); this is useful if you have to troubleshoot a target machine
  • Deletes restore points; if you have to use System Restore on the target machine, you could run into problems if you use a restore point from the master PC
  • Removes the local administrator’s profile and disables the account; this ensures that you don’t accidentally copy your files to the target machines and leave the admin account unprotected
  • Ensures that the target computer boots to Audit mode, allowing you to install third-party applications and device drivers
  • Ensures that mini-setup starts after the first boot, allowing you to configure the target computer’s new name and other settings
  • Allows you to reset the grace period for Windows product activation (rearm) up to three times; this gives you more time to activate target computers.
Listed below are some of the pros and cons of the SCCM site (primary/secondary) configuration:
Secondary site pros:
  1. Secondary sites do not require additional Configuration Manager 2007 server licenses.
  2. Secondary sites do not require an additional SQL Server database at the secondary site.
  3. Clients can be managed across a slow network connection link, such as a wide area network (WAN) connection between sites, without the need to configure client agent settings.
  4. Secondary sites can have management points (called proxy management points) to help prevent client reporting information, such as inventory reports and status messages, from traversing slow network connections to the primary site.
  5. Remote sites can be managed centrally from a parent primary site without the need for an on-site administrator at the secondary site.
Cons:
  1. Parent sites for secondary sites cannot be changed without uninstalling them and installing a new secondary site.
  2. Secondary sites cannot be upgraded to primary sites. To replace a secondary site with a primary site, you must uninstall the secondary site and install a primary site.
  3. Because Configuration Manager clients are always assigned to primary sites, client agent settings cannot be configured differently from the secondary site’s parent site for clients located within the boundaries of secondary sites.
Branch distribution point pros:
  1. Reduces site hierarchy complexity.
  2. Allows packages to be copied out of band to a distribution point within the site.
  3. Does not require a server operating system (though client operating systems are limited to 10 connections).
  4. Provides on-demand package distribution, in which packages are downloaded to the branch distribution point only when specifically requested by a client computer.
  5. Branch distribution points download content from standard distribution points using BITS (Background Intelligent Transfer Service).
  6. Supports all packages, including software update packages and operating system deployment packages.
Cons:
  1. Does not manage traffic uploaded from clients to management points.
  2. Does not manage traffic when downloading policies from management points to clients.
  3. Does not provide a local software update point to scan for software updates.
  4. Does not provide precise time and bandwidth controls between sites, as a Sender does.
  5. Restricts available connections to 10 or fewer if using a client operating system.
When choosing between primary sites, secondary sites, and branch distribution points, you should consider the amount of network traffic that the planned and future site clients will generate. It might be beneficial to install a secondary site if the amount of network traffic generated by clients across a slow link would be greater than the site-to-site communication traffic generated by a secondary site. Clients generate uncompressed network traffic when they request policies and send information—such as inventory, discovery, and status message information—to their management point, based on the policy polling interval and client agent settings you define in the primary site’s Configuration Manager console. Site-to-site communication between primary and secondary sites is compressed, and it can be scheduled and throttled by configuring site address settings.
Brief information on how hardware inventory is processed when the client sends it to the MP:
When the client runs the hardware inventory agent, it collects and sends the inventory information (logged in inventory.log in the c:\windows\system32\ccm\logs folder) to the MP server, where it shows up in the hinv.log file. You can identify the record by its GUID or computer name. Once you see that the inventory information was sent to the MP successfully, there is no problem with the client.
Next, look into MP_Hinv.log on your MP server (D:\SMS_CCM\Logs, which also contains MP_DDR for full client DDRs, MP_SINV, MP_Location, and so on). There you can see that the .xml file has been processed (for example: Hinv Sax: loading D:\SMS\mp\outboxes\hinv.box\HinvAttachmentADDITX5X.xml).
Once the MP receives the file, it is moved from the MP outbox to the authenticated dataldr.box, and the file name changes. This information can be found in mpfdm.log.
Finally, open dataldr.log and notice that the file is moved into the dataldr.box\process directory and then renamed to X??????????.mif.
Some more information from dataldr.log:
Processing Inventory for Machine: XPCLIENT01   Version 1.8  Generated: 09/24/2010 12:51:34
Begin transaction: Machine=XPCLIENT01(GUID:5054FAE8-C9EB-4CEE-8C0D-1E742BA7C93A)
Commit transaction: Machine=XPCLIENT01(GUID:5054FAE8-C9EB-4CEE-8C0D-1E742BA7C93A)
Done: Machine=XPCLIENT01(GUID:5054FAE8-C9EB-4CEE-8C0D-1E742BA7C93A) code=0 (8 stored procs in XH6S6I9DX.MIF)
No more machine MIFs to be processed, terminating thread
If you see any .mif files in inboxes\dataldr.box\badmifs, something went wrong with that client's information while it was being processed into the database.
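As a quick health check, you can count what lands in badmifs before digging into dataldr.log. A minimal Python sketch (the folder and file names here are stand-ins for illustration, not SCCM defaults):

```python
import os
import tempfile

def count_bad_mifs(badmifs_dir):
    """Return the number of .mif files sitting in a dataldr badmifs folder.

    Anything here failed to process into the site database, so a non-zero
    count is worth investigating in dataldr.log.
    """
    if not os.path.isdir(badmifs_dir):
        return 0
    return sum(1 for name in os.listdir(badmifs_dir)
               if name.lower().endswith(".mif"))

# Demo against a temporary folder standing in for inboxes\dataldr.box\badmifs
demo = tempfile.mkdtemp()
open(os.path.join(demo, "XH6S6I9DX.MIF"), "w").close()
open(os.path.join(demo, "notes.txt"), "w").close()
print(count_bad_mifs(demo))  # 1
```

Pointing a scheduled version of this at the real inbox path would give you an early warning before the component status alerts fire.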
Here are some points I have observed from the logs:
  1. In SCCM, the client component settings are saved in the D:\SMS\inboxes\clicfg.src inbox folder, which is replicated to the CAP_Sitecode\clicomp folder. If you look into this folder you can see remctrl.cfg, hinv.cfg, sinv.cfg, and other settings files. These client component settings are saved in the sms\inboxes\clifiles.src\clcmpdir.ini file, so inboxmgr.log keeps track of changes in the clifiles\hinv\ folder. I tried copying a new test document, and within a few seconds it was copied to CAP_P04\clifiles.box\hinv. So every 5 seconds, inboxmgr.log logs whether there are any changes in the hinv folder.
  2. When you enable the client agent settings, they are saved in the clicfg.src inbox folder, and the command lines for these settings are under D:\SMS\inboxes\clicomp.src. For example, for hardware inventory the command line used is CommandLine=i386\inhinv32.exe /s.
  3. When the client sends its hardware inventory to the MP, it goes to SMS\MP\Outboxes\hinv. Once it is processed there, it is moved to the sms\inboxes\auth\dataldr.box\ folder. If the site server has any issues updating this data in the SMS database, it starts raising alerts under component status.
About Site control file:
Configuration data is gathered from default settings installed with SMS, changes made by SMS administrators who make site configuration changes, and changes made by SMS service and thread components. When site configuration changes are made, SMS updates the site control file and the registry where configuration changes are stored.
Since most SMS services function on a schedule, after an administrator or SMS turns on a service or thread component, the component checks the site control file for its configuration. This file was created based on the original settings during the SMS site installation. SITECTRL.CT0 contains the current settings for the Site Properties which is duplicated in the SMS SQL database.
The information contained in the SMS SQL database is the information viewed in the SMS Administrator Site Properties. The SMS Administrator queries the SQL database for the information. Whenever you make a change in the SMS Administrator to the Site Properties the SMS Hierarchy Manager service creates a temporary configuration file in the SMS\SITE.SRV\SITECFG.BOX directory with a CT1 extension.
This file contains the new configuration based on the selections you have picked. When the SMS Site Configuration Manager service scans this directory and sees a *.CT1 file, it picks the file up and overwrites the SITECTRL.CT0 file with the new configuration. Then it creates a *.CT2 file that is picked up by the SMS Hierarchy Manager service which updates the SMS SQL database with the new information.
The CT1 file is generally considered the PROPOSED file and the CT2 file is considered the ACTUAL Site Control files. These files are deleted once the property change process is complete. The CT0 file is the Master Site Control file. Logically
1. SMS_HIERARCHY_MANAGER creates the CT1 file.
2. SMS_SITE_CONFIG_MANAGER overwrites the CT0 file, deletes the CT1 file, and creates the CT2 file.
3. SMS_HIERARCHY_MANAGER updates the SMS SQL database with the CT2 file and deletes it.
The site control file gives you the SMS site hierarchy properties, such as the site server name, site code, site name, any add-on packs (such as OSD or mobile device management), version, and security mode.
It also contains the properties of all manager components, such as SMS_DISCOVERY_DATA_MANAGER and SMS_SITE_HIERARCHY_MANAGER, along with descriptions of SMS message IDs like 10007 and 10009.
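The CT0/CT1/CT2 handoff described above can be illustrated with a toy simulation. This is purely illustrative: the real services validate, merge, and log far more than copying files.

```python
import os
import tempfile

# Toy simulation of the CT0/CT1/CT2 handoff.
sitecfg = tempfile.mkdtemp()                  # stands in for SMS\SITE.SRV\SITECFG.BOX
ct0 = os.path.join(sitecfg, "SITECTRL.CT0")   # master site control file
ct1 = os.path.join(sitecfg, "change.CT1")     # PROPOSED configuration
ct2 = os.path.join(sitecfg, "change.CT2")     # ACTUAL configuration

with open(ct0, "w") as f:
    f.write("old settings")

# 1. Hierarchy Manager writes the proposed change as a .CT1 file.
with open(ct1, "w") as f:
    f.write("new settings")

# 2. Site Configuration Manager overwrites CT0 with the CT1 contents
#    (deleting the CT1) and emits the actual configuration as a .CT2 file.
os.replace(ct1, ct0)
with open(ct2, "w") as f:
    f.write("new settings")

# 3. Hierarchy Manager reads the CT2, updates the SQL database, deletes it.
with open(ct2) as f:
    database = f.read()
os.remove(ct2)

print(database)  # new settings
```

After the run, only the updated CT0 remains, the database copy matches it, and both temporary files are gone, mirroring the three steps listed above.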
About SMS Provider:
The SMS Provider is a WMI provider that allows both read and write access to the Configuration Manager 2007 site database. The SMS Provider is used by the Configuration Manager console, Resource Explorer, tools, and custom scripts used by Configuration Manager 2007 administrators to access site information stored in the site database. The SMS Provider also helps ensure that Configuration Manager 2007 object security is enforced by only returning site information that the user account running the Configuration Manager console is authorized to view.
The SMS Provider can be installed on the site database server, the site server, or another server-class computer during Configuration Manager 2007 Setup. After Setup has completed, the current installed location of the SMS Provider is displayed on the site properties General tab.
If the SMS Provider computer is offline, no Configuration Manager 2007 consoles for the site will function.
Delete Aged Discovery Data:
The Delete Aged Discovery Data task deletes any client for which no DDR has been received within the configured timeframe, regardless of which discovery method generated the DDR.
Delete inactive client discovery data:
The Delete Inactive Client Discovery Data task deletes any clients marked as inactive for the configured period. Clients can become marked as inactive for two reasons:
1) The client is marked as obsolete
2) By the client status reporting feature in ConfigMgr 2007 R2. If you haven't implemented this, clients only become inactive when they are obsolete.
This task isn't just looking at heartbeat DDRs; it looks at whether the inactive bit is set. The lack of a heartbeat discovery DDR is one of the things that can mark a client inactive if you have implemented the client status reporting feature, as can the lack of software and hardware inventory or the lack of requests to a management point for machine policy.
Delete obsolete client discovery data:
The Delete Obsolete Client Discovery Data task works similarly to Delete Inactive, but operates on the obsolete bit rather than the inactive bit.
Clients are marked obsolete if they are determined to be a new record for an already existing client, and the records can’t be merged.
So, as stated in the beginning, running AD System Group Discovery has no impact on clients being marked active or obsolete, and hence will not influence the corresponding maintenance tasks.
Some more info about when a client becomes obsolete or inactive:
- Resources are only marked obsolete if another resource is created with the same hardware ID.
- A resource deleted by the Delete Aged Discovery Data task will be recreated by AD discovery if the object still exists in AD.
- A resource will be marked inactive if it is marked obsolete (this usually doesn’t matter, though, because the delete-obsolete interval is usually shorter than the delete-inactive interval).
- A resource will only be marked inactive by R2 client health if it is newly discovered or is obsolete. Looking back at previous answers of my own (on this and other forums), I’ve stated that a lack of heartbeat will also cause a resource to be marked inactive; based on the documentation I’ve just reviewed, I don’t think that is true.
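The rules above can be condensed into a small decision function. This is only a sketch of the rules as stated, not ConfigMgr's actual implementation:

```python
def is_marked_inactive(obsolete, newly_discovered, r2_client_health=False):
    """Sketch of the inactive-marking rules listed above.

    A resource is marked Inactive when it is obsolete, or, with the R2
    client status/client health feature enabled, when it is newly
    discovered.
    """
    if obsolete:
        return True
    if r2_client_health and newly_discovered:
        return True
    return False

print(is_marked_inactive(obsolete=True, newly_discovered=False))   # True
print(is_marked_inactive(obsolete=False, newly_discovered=True))   # False
print(is_marked_inactive(obsolete=False, newly_discovered=True,
                         r2_client_health=True))                   # True
```

The middle case captures the key point: without the R2 feature, a merely new resource is not marked inactive.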
Difference between Refresh DP and Update DP:
1) Update Distribution Points increments the package version, goes to the source location, constructs the new package content, but only sends the delta between what is already present on the DP and what is currently in the new package source. This action is package-specific: once you trigger it, all the DPs to which the package has been distributed will get the new version.
2) Refresh Distribution Points does not increment the package version; it simply sends the current version of the package content again to a specific DP. So this action is specific to a package-DP association and should be used when the content on a particular DP appears corrupted.
Impact of enabling BDR (binary differential replication):
1) For Update Distribution Points: Consider the scenario where one (or potentially several) files in the package source have been updated or modified. Enabling BDR triggers distribution manager to diff the current version of each file against the new version and send only the delta changes within the file. On the receiving side, a BDR merge is then performed for that delta. So in the BDR case we may end up sending less data than in the scenario where BDR is not enabled on the package.
2) For Refresh Distribution Points: the BDR setting has no effect; the entire current version of the package is simply sent.
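The file-level part of that comparison can be sketched with hashes. This only illustrates the idea of sending a delta; real BDR goes further and computes sub-file differences, and the names below are made-up examples:

```python
import hashlib

def changed_files(dp_hashes, source_files):
    """Hash each file in the package source and keep only those whose
    content differs from the version already recorded for the DP."""
    delta = {}
    for name, content in source_files.items():
        digest = hashlib.sha256(content).hexdigest()
        if dp_hashes.get(name) != digest:
            delta[name] = content
    return delta

on_dp = {"setup.exe": hashlib.sha256(b"v1").hexdigest()}
source = {"setup.exe": b"v2", "readme.txt": b"unchanged"}
# setup.exe changed and readme.txt is new, so both are part of the delta
print(sorted(changed_files(on_dp, source)))  # ['readme.txt', 'setup.exe']
```

An unchanged file hashes to the same digest and drops out of the delta entirely, which is why an Update with BDR can be much cheaper than a Refresh.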
Deprecated tools in Windows PE version 2.0:
Intlcfg.exe
Peimg.exe
Pkgmgr.exe
PostReflect.exe
VSP1CLN.exe
Apply registry changes to the computer without a restart or logging off and on:
When you make registry changes, you may need to log off and log back in for them to take effect. Instead, you can run the following command without logging off.
You will have to run this command as an administrator (the computer will not allow a normal user to make these registry changes anyway):
RUNDLL32.EXE user32.dll,UpdatePerUserSystemParameters ,1
How to clone a VDI/VHD file if you receive an error when reusing an existing VDI file:
Today I created a VDI file in VirtualBox with the Server 2008 operating system, to be used for all my lab purposes. Using the existing one I made a DC, and next I wanted to create SCCM. I started creating a new VM for SCCM using the existing VDI, but it did not work because the UUID is already in use; the error is similar to "cannot register the hard disk with UUID in VirtualBox".
This is unlike Virtual PC, where you can use the same VHD as many times as you like.
To resolve this, you need to clone the VDI with the VBoxManage tool, which ships with VirtualBox; you don't need to download anything else.
Open a command prompt and change the directory to C:\Program Files\Oracle\VirtualBox
Now type: vboxmanage clonehd "E:\Lab\SCCM R2\SCCM.vdi" "E:\Lab\SCCM R2\SCCMR2.vdi"
where SCCM.vdi is my existing VDI and SCCMR2.vdi is my new VDI file.
VBScript tips:
How to write data to a file, and how to append data to a file, in VBScript:
Set fso = CreateObject("Scripting.FileSystemObject")
' 1 = ForReading, 2 = ForWriting, 8 = ForAppending
Set objtextfile = fso.OpenTextFile("eskon.txt", 8, True)
thisisavariable = "example value"
objtextfile.WriteLine "This is a simple file " & thisisavariable
objtextfile.Close
OpenTextFile method (how to read a text file line by line, for example a list of computers):
Set fso = CreateObject("Scripting.FileSystemObject")
Set objinputfile = fso.OpenTextFile("eskon.txt", 1, True)
Do While objinputfile.AtEndOfStream <> True
    strcomputer = objinputfile.ReadLine
Loop
objinputfile.Close
Below is an example that lists the available disk space, with partition names, for a list of computers given in a text file:
Set fso = CreateObject("Scripting.FileSystemObject")
Set objinputfile = fso.OpenTextFile("eswar.txt", 1, True)
Set objoutputfile = fso.OpenTextFile("raju.txt", 2, True)
Const HARD_DISK = 3
Do While objinputfile.AtEndOfStream <> True
    strComputer = objinputfile.ReadLine
    Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
    Set colDisks = objWMIService.ExecQuery _
        ("Select * from Win32_LogicalDisk Where DriveType = " & HARD_DISK)
    objoutputfile.WriteLine strComputer
    For Each objDisk in colDisks
        objoutputfile.WriteLine "DeviceID: " & vbTab & objDisk.DeviceID
        objoutputfile.WriteLine "Free Disk Space: " & vbTab & objDisk.FreeSpace
    Next
Loop
objinputfile.Close
objoutputfile.Close
Sending an email with the information collected by the script output:
Set objoutputfile = fso.OpenTextFile("raju.txt", 1)  ' reopen the output file for reading
Set objEmail = CreateObject("CDO.Message")
objEmail.From = "eskon@eskonr.com"
objEmail.To = "eskon@eskonr.com"
objEmail.Subject = "info about script output!"
objEmail.Textbody = objoutputfile.ReadAll
objEmail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
objEmail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "ch.eskonr.com"
objEmail.Configuration.Fields.Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25
objEmail.Configuration.Fields.Update
objEmail.Send

Dec 30, 2010

Installing SCOM 2007 R2 on a SQL 2008 Instance with all Windows Firewalls Enabled.

I decided I needed to re-install my lab environment.  I wanted to keep all of the firewalls on during the install process and only open the ports that are actually needed. I installed SQL using a named instance as many customers use a SQL 2008 cluster.

After I installed the SCOM database on the SQL 2008 server with all firewalls on, I created a firewall rule to allow connections on port 1433, as specified in the Supported Configurations doc:
Root management server 1433 —> OperationsManager database
I also set up a firewall rule to allow port 1434 back to the RMS server from the SQL instance server. (Also in the guide.)


Root management server 1434 UDP < — OperationsManager database
I started the install of SCOM on the RMS server. I unchecked Database, as my database is already installed on the SQL instance.
I typed in my SC Database Instance Name and clicked Next
But I got this error: "Setup cannot locate the SC database"
So I enabled firewall logging for dropped packets to see what was getting blocked.

In the SCOM setup I clicked back and then next.
I checked the firewall logs in %systemroot%\system32\Logfiles\Firewall\pfirewall.log  and it looks like UDP port 1434 is being dropped

date time action protocol src-ip dst-ip src-port dst-port size path
12/26/2010 16:56:54 DROP UDP 192.168.2.63 192.168.2.61 58321 1434 38 RECEIVE

I create another rule on the SQL server to enable UDP port 1434

In the SCOM setup I click back and next again.

Once again, the same failure: "Setup cannot locate the SC database"
Back to the firewall logs.  It now needs TCP port 62756 (Not in the guide)
date time action protocol src-ip dst-ip src-port dst-port size path
12/26/2010 17:12:03 DROP TCP 192.168.2.63 192.168.2.61 50503 62756 38 RECEIVE


I create another rule on the SQL server to enable TCP port 62756
After that rule is enabled, I am able to continue on and install SCOM successfully, with all of the Windows firewalls still on.
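Scanning pfirewall.log by eye gets tedious once several rules are missing. A short script can pull out just the dropped connections; this sketch assumes the space-separated field layout shown in the log excerpts above:

```python
def dropped_connections(log_lines):
    """Extract (protocol, destination port) pairs from DROP entries in a
    Windows firewall log with this field layout:
    date time action protocol src-ip dst-ip src-port dst-port size path
    """
    drops = []
    for line in log_lines:
        fields = line.split()
        if len(fields) >= 8 and fields[2] == "DROP":
            drops.append((fields[3], fields[7]))
    return drops

sample = [
    "12/26/2010 16:56:54 DROP UDP 192.168.2.63 192.168.2.61 58321 1434 38 RECEIVE",
    "12/26/2010 17:12:03 DROP TCP 192.168.2.63 192.168.2.61 50503 62756 38 RECEIVE",
]
print(dropped_connections(sample))  # [('UDP', '1434'), ('TCP', '62756')]
```

Each pair tells you exactly which protocol and destination port needs a new inbound rule on the SQL server.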

Installing Opalis 6.3 on Windows 2008 R2

As many of you know, Opalis is one of the latest additions to the System Center family. By the way, the family is growing: take a look at the AVIcode solution. If you need to monitor managed-code applications and web services, this is a must-have; it has always been a great product but really expensive, and now it's included in the SC Suite licensing and is affordable for everyone.
Let’s return to Opalis. The latest release, 6.3, finally adds support for Windows 2008 R2 and publishes Integration Packs (IPs) for the whole System Center family (OM, CM, SM, VMM, DPM). This has been a great opportunity to add this technology to my System Center lab, and since the whole setup process hasn’t been easy, I want to share my dos and don’ts for installing Opalis on Windows 2008 R2.
A foreword: I’m not a Java fan. I don’t like it as a programming language, and I don’t believe in the write-once-run-everywhere mantra. This means that, regarding the JBoss/Java stuff, I’m not an expert at all; I will just explain the way I set up my environment, and I do not claim this is the only way nor the best one. So it can’t happen soon enough that Microsoft moves away from JBoss and the LogiXML LGX Report stuff.
A second foreword: Opalis is a great solution; if you haven’t had the chance to take a look at it, I strongly recommend you do so. It empowers IT people, making it easy to design and maintain complex automations without requiring programming skills.
Let’s start with my donts:


  • don’t try to push the whole environment to Windows 2008 R2; only the management server and the action servers with the System Center Integration Packs are supported on Windows 2008 R2
  • don’t follow the TechNet documentation for securing the Operator Console if you’re using an internal CA; I will explain the entire process later
  • I wasn’t able to install the LGX reporting stuff on Windows 2008 R2. After a trial-and-error session I gave up when the authentication process refused to work with a "method not found" error (System.Security.Principal.WindowsIdentity.GetRoles). I would suggest skipping LGX reporting while waiting for a SQL Reporting Services solution from Microsoft, or installing the reporting on Windows 2003 (sigh), where it worked at the first attempt.
And my dos:
  • turn UAC off; the setup should work if run with administrative privileges, but until I turned off UAC I had all sorts of errors
  • only use 32-bit JBoss; all the Opalis DLLs are 32-bit, so don’t even try to install JBoss x64 (as I first did)
  • use Windows 2003 SP2 (sigh) for non-System Center Integration Packs. This means you need at least two systems: the management server, database server, and action server for SC IPs on Windows 2008 R2, and one or more Windows 2003 servers for the other IPs. Things should improve during 2011; let’s see
  • you will probably want to run your JBoss process as a service. I used this tool with success on Windows 2008 R2: http://labs.jboss.com/jbossweb/downloads/jboss-native-2-0-9.html. Since the tool is designed for a newer JBoss version, I would advise, just for clarity and not for functionality, modifying the bat file with the JBoss version used with Opalis:
    REM
    REM VERSION, VERSION_MAJOR and VERSION_MINOR are populated
    REM during the build with ant filter.
    REM
    set SVCNAME=JBAS42SVC
    set SVCDISP=JBoss Application Server 4.2
    set SVCDESC=JBoss Application Server 4.2.3 GA/Platform: Windows x64
    set NOPAUSE=Y
  • Add the JAVA_HOME environment variable to the System variables
  • add the %JAVA_HOME%\Bin path to the System PATH environment variable

Pre-setup steps – Windows 2008 R2

Create the Opalis service account and remember to add it to the local Administrators group on the management server and on the planned action servers and clients.
Create the Opalis database before running setup; the setup procedure doesn’t give you the chance to configure the DB in terms of size, options, and so on. So I would advise creating the DB before running setup, turning off autogrow, and probably putting it in simple recovery mode. Don’t forget to add the Opalis service account as dbo for the newly created DB.
Always install .NET Framework 3.5 if you plan to use the System Center Integration Packs; it’s a prerequisite documented in the release notes, but you could miss it (as I did). Technically you need the .NET Framework only on the action servers that are supposed to run the IPs and on the client used to edit the policies, but I would recommend installing it on the management server as well.

Opalis Operator Console

To install the Operator Console, follow the TechNet instructions. Remember to install the 32-bit version of JBoss and, once it is installed (copied), remember to slipstream Service Pack 1 into it (copy the SP1 files into the JBoss installation directory). Once you have taken the time to download all the prerequisites in a sort of treasure hunt, just run the PowerShell script to set up the Operator Console. This entire process should be smooth; it worked as expected in my case.
To run JBoss as a service see my dos topic, this is something you want to do in a production environment.
To secure the console (again something you want to do, since the console uses basic authentication and users are required to type their username and password in clear text), you can follow the TechNet documentation if you’re going to use a public CA. If you want to use an internal CA, you have to perform the following steps. (I copied the relevant part of the TechNet page and modified the checklist where needed.)
At the end of the checklist you will have added a certificate called Opalis (alias) enrolled from an internal CA in its own datastore (opalis). I assume the internal PKI has a standard architecture with a secured root CA and a sub CA used for enrollment.
To generate and prepare a certificate store for the Opalis SSL certificate (alias=Opalis)
At the command prompt, type
  1. "%JAVA_HOME%\bin\keytool" -genkey -alias Opalis -keyalg RSA -keystore "%JAVA_HOME%\jre\lib\security\opalis"
  2. At the prompts, provide the following information:
    1. Keystore password. In a default JDK installation the password is changeit. If you plan to change the password (a good idea), remember to use the new password anywhere you would otherwise type "changeit".
    2. First and last name. Type the fully qualified domain name of the Operator Console host computer. This is the only information that matters here.
    3. Organizational unit
    4. Organization
    5. City
    6. State or Province
    7. Two-letter country code
  3. When prompted for the Alias password, leave it blank and press ENTER. (this way it is identical to the keystore password)
    The certificate is added to the JAVA Opalis certificate store. I prefer to have a separate store, easier to maintain and backup.
To generate a certification authority request file
  1. Type the following command:  "%JAVA_HOME%\bin\keytool" -certreq -alias Opalis -keyalg RSA -keystore "%JAVA_HOME%\jre\lib\security\opalis" -file opalis.csr
  2. You will also be asked for the keystore password. In a default installation of the JDK and in our example the password is changeit.
  3. Submit the opalis.csr file to the certification authority.
Submitting the certificate request to a Microsoft internal CA:
Log on to the web enrollment page for the CA (the following screenshots refer to a Windows 2003-based CA, but the same applies to a Windows 2008-based one).
Choose advanced certificate request
Submit a certificate request … file
Copy and paste the content of the csr file you generated opalis.csr (it’s a text file you can open with notepad, you must copy the entire content)
Download the certificate; let's call it opalis.cer.
From the same web site, download the root CA certificate in DER format and the Sub CA certificate in DER format, you’ll need them to use the SSL certificate. Let’s assume you named the three certificates rootca.cer, subca.cer, opalis.cer (the latter is the SSL certificate).
Importing the certificate into Java store and enabling the Operator Console
  1. When you receive the certificate from the certification authority, import it using the following commands:
    "%JAVA_HOME%\bin\keytool" -import -alias RootCA -keystore "%JAVA_HOME%\jre\lib\security\opalis" -trustcacerts -file rootca.cer
    "%JAVA_HOME%\bin\keytool" -import -alias SubCA -keystore "%JAVA_HOME%\jre\lib\security\opalis" -trustcacerts -file subca.cer
    "%JAVA_HOME%\bin\keytool" -import -alias Opalis -keystore "%JAVA_HOME%\jre\lib\security\opalis" -file opalis.cer
    The certificates are added to the Java opalis certificate store.
Next step: Enable Operator console access using HTTPS
To enable Operator Console access using the HTTPS protocol
  1. Open the \server\default\deploy\jboss-web.deployer\server.xml file.
Uncomment the HTTPS protocol information in the server.xml file. The resulting file should look similar to:


<Connector port="5314" address="${jboss.bind.address}"
    maxThreads="250" maxHttpHeaderSize="8192"
    emptySessionPath="true" protocol="HTTP/1.1"
    enableLookups="false" redirectPort="8443" acceptCount="100"
    connectionTimeout="20000" disableUploadTimeout="true" />

<Connector port="8443" address="${jboss.bind.address}"
    protocol="HTTP/1.1"
    SSLEnabled="true"
    maxThreads="250"
    scheme="https" secure="true"
    clientAuth="false"
    keystoreAlias="Opalis"
    keystoreFile="${java.home}/lib/security/opalis"
    keystorePass="changeit"
    sslProtocol="TLS" />
  1. Replace the port number for each protocol with the actual port numbers you will use. The default port number for the Operator Console is 5314; the default port number for HTTPS is 8443.
  2. To turn off a protocol, comment out the connector of the protocol that you want to block by placing <!-- before and --> after it. Turning off a protocol means that users cannot access the Operator Console using that protocol.
  3. Copy the server folder from \offline\protocol\https to .
  4. Modify the application.xml file located at \server\default\deploy\OpsConsoleApp-1.0.ear\META-INF\application.xml by changing OpConsoleWebService-1.0.jar to OpConsoleWebServiceSSL-1.0.jar.
  5. Modify the security-constraint section of the \server\default\deploy\OpConsoleWebServiceBridge-1.0.war\WEB-INF\web.xml file to the following:


<security-constraint>
  <web-resource-collection>
    <web-resource-name>SecuredAll</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>
  6. Restart JBoss to load the new server.xml settings.
- Daniele
This posting is provided "AS IS" with no warranties, and confers no rights.

Happy New Year

Dec 21, 2010

Great New Reporting Documentation for SCOM Published

http://technet.microsoft.com/en-us/library/gg508710.aspx
Covers the following:
  • Custom Reporting Overview
  • Setting Up the Environment
  • Creating Custom Reports
  • Data Warehouse Schema
  • Inside a Generic Report
  • Custom Report Queries

Test a remote SQL connection quickly and easily

I ran across a very useful MSDN blog post today and thought I’d share it. It explained a very quick and easy way to test a SQL connection and verify if you can logon with a specified account.
Here’s how:


Go to any folder on the system from where you want to test the connection, and create a new file
Then change the extension to .UDL.
When you open that file, you can easily see any SQL Server within your reach.
And then easily test the connection for a specified account (in this case Windows Integrated authentication) by just opening the database list or pressing Test Connection.
I think this is a very useful way to quickly check a connection without having to configure or install anything.
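Since the whole trick is just an empty file with a .udl extension, you can also script its creation; double-clicking the result on Windows opens the Data Link Properties dialog, which does the actual connection testing. A minimal sketch (the folder and file name are arbitrary examples):

```python
import os
import tempfile

# Create an empty file with a .udl extension. An empty file is enough:
# the Data Link Properties dialog fills in the connection string for you.
folder = tempfile.mkdtemp()
udl_path = os.path.join(folder, "testconnection.udl")
open(udl_path, "w").close()

print(os.path.exists(udl_path))       # True
print(os.path.splitext(udl_path)[1])  # .udl
```

Handy if you want to drop a ready-made test file onto several servers at once.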