Chapter 5. Getting Out of the Database

In the previous chapters, we mainly focused on existing dbms_scheduler functionality—things that are not new in Oracle 11g. In this chapter, this is going to change. Here, we will get our hands on the most important addition to the Scheduler—the remote job agent. This is a whole new kind of process that allows us to run jobs on machines that do not have a running database. They must, however, have the Oracle Scheduler Agent installed, as this agent is responsible for executing the remote job. This gives us a lot of extra power and also solves the process-owner problem that exists in classical local external jobs. In classical local external jobs, the process owner is by default nobody and is controlled by $ORACLE_HOME/rdbms/admin/externaljob.ora. This creates problems in installations where the software is shared between multiple databases, because it is not possible to separate the processes per database. In this chapter, we will start by installing the software, and then see how we can make good use of it. After that, you will probably want to get rid of the classical local external jobs as soon as possible and embrace all the improvements that the remote job agent offers over the old job type.

Security

Anything that runs on our database server can wreak havoc on our databases. No matter what happens, we want to be sure that our databases cannot be harmed. As we have no control over the contents of the scripts that can be called from the database, it seems logical not to have these scripts run by the same operating system user who owns the Oracle database files and processes. This is why Oracle chose the user nobody as the default user to run classical local external jobs. This default can be adjusted by editing the contents of $ORACLE_HOME/rdbms/admin/externaljob.ora.

On systems where multiple databases use the same $ORACLE_HOME directory, this automatically means that all these databases run their external jobs using the same operating system account. This is not very flexible. Luckily for us, Oracle changed this in the 11g release, where remote external jobs were introduced. In this release, Oracle decoupled the job runner process from the database processes. The job runner process, that is the job agent, now runs as a separate process and is contacted using a host:port combination over TCP/IP.

The complete name for the agent is remote job agent, but this does not mean the job agent can only be installed remotely. It can also be installed on the same machine where the database runs, where it can easily replace the old-fashioned local external jobs. As the communication is done over TCP/IP, the job agent process can run under any account on the machine. Oracle has no recommendation for this account, but it could very well be nobody. The operating system user who runs the job agent does need some privileges in the $ORACLE_HOME directory of the remote job agent, namely execute privileges on $ORACLE_HOME/bin/* as well as read privileges on $ORACLE_HOME/lib/*. At the end of the day, the user has to be able to use the software. The remote job agent must also be able to write its administration and log files in a location that is $ORACLE_HOME/data by default, but can be pointed to a different location by setting the EXECUTION_AGENT_DATA environment variable.

In 11g, Oracle also introduced a new object type called CREDENTIAL. We can create credentials using dbms_scheduler.create_credential. This allows us to administer, in the database, which operating system user is going to run our jobs. It also allows us to control who can use a credential. To see which credentials are defined, we can use the *_SCHEDULER_CREDENTIALS views. We can grant access to a credential by granting the execute privilege on it. This gives us far more control than we ever had in Oracle 10gR2. Currently, the Scheduler Agent can only use a username/password combination to authenticate against the operating system.

The jobs scheduled on the remote job agent will run using the account specified in the credential that we use in the job definition. Check the Creating job sections to see how this works. This does introduce a small maintenance issue. On many systems, customers are forced to use security policies such as password aging. When combined with credentials, this might cause a credential to become invalid.

Note

Any change in the password of a job runtime account needs to be reflected in the credential definition that uses the account.

As we get much more control over who executes a job, it is strongly recommended to use the new remote job agent in favor of the classical local external jobs, even locally. The classical external job type will soon become history.

A quick glimpse with Wireshark, a network sniffer, does not reveal the credentials in clear text, so it looks like it is secure by default. However, the job results do pass in clear text. The agent and the database communicate using SSL and, for this purpose, a certificate is installed in ${EXECUTION_AGENT_DATA}/agent.key. You can check this certificate using Firefox. Just point your browser to the host:port where the Scheduler Agent is running and use Firefox to examine the certificate.
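
If no browser is at hand, the certificate can also be examined from the command line with openssl. This is only a sketch and not from the original text; the host and port are the ones used elsewhere in this chapter, so adjust them to your own agent:

# show the subject and validity dates of the agent's SSL certificate
echo | openssl s_client -connect pantzer.lan:15021 2>/dev/null | openssl x509 -noout -subject -dates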

Note

There is a bug in 11.1.0.6 that generates a certificate with an expiration date of 90 days past the agent's registration date. In such a case, you will start receiving certificate validation errors when trying to launch a job. Stopping the agent, removing the agent.key file, and re-registering the agent with the database solves this.

The registration will be explained shortly.

Installation on Windows

We need to get the software before the installation can take place. The Scheduler Agent can be found on the Oracle Database Gateways disk, which can be downloaded from the Oracle Technology Network at http://www.oracle.com/technology/software/products/database/index.html.

There's no direct link to this software, so find a platform of your choice and click on See All to get the complete list of database software products for that platform. Then download the Oracle Database Gateways CD. Unzip the installation CD, navigate to the setup program found in the top-level folder, and start it.

The following screenshot shows the download directory where you run the setup file:

After running the setup, the following Welcome screen will appear. The installation process is simple.

Click on the Next button to continue to the product selection screen.

Select Oracle Scheduler Agent 11.1.0.6.0 and click on the Next button to continue. Enter Name and Path for ORACLE_HOME (we can keep the default values).

Now click on Next to reach the screen where we can choose a port on which the database can contact the agent.

I chose 15021. On Unix systems, pick a port above 1023 because the lower ports require root privileges to open. The port should be unused, easy to remember, and not in use by the database's listener process. If possible, use the same port for all the remote job agents registered to the same database. Also, don't forget to open the firewall for that port. Hitting the Next button brings us to the following Summary screen:

We click on the Install button to complete the installation. If everything goes as expected, the End of Installation screen pops up as follows:

Click on the Exit button and confirm the exit.

We can find Oracle Execution Agent in the services control panel. Make sure it is running when you want to use the agent to run jobs.

Installation on Linux

Having downloaded and unzipped the installer, navigate to the gateways directory and start the installer with ./runInstaller. We will get the same screens as we did for the Windows installation. The only difference, apart from the slashes pointing in the right direction, is the pop up asking to run root.sh at the end of the installation. Running root.sh sets the setuid root bit on the $ORACLE_HOME/bin/jssu executable.

Run root.sh as requested and finish the installation. The root.sh script does nothing more than set setuid root on the $ORACLE_HOME/bin/jssu executable, using the following code:

# jssu must be setuid and owned by root
if [ -f $ORACLE_HOME/bin/jssu ]; then
    $CHOWN root $ORACLE_HOME/bin/jssu
    $CHMOD 4750 $ORACLE_HOME/bin/jssu
fi

Put a schagent:/data/app/oracle/product/schagent/11.1.0.6:N entry in /etc/oratab and use oraenv to set the environment for the agent as follows:
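
As a rough sketch of the listing (the file size and date are omitted here, and the ORACLE_HOME path is the one used throughout this chapter):

$ . oraenv
ORACLE_SID = [oracle] ? schagent
$ ls -l $ORACLE_HOME/bin/jssu
-rwsr-x--- 1 root dba ... /data/app/oracle/product/schagent/11.1.0.6/bin/jssu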

In the listing shown above, it is clearly visible that jssu has the setuid root privileges. This also means that root.sh has been run successfully. The setuid root privilege means that the executable runs with the privileges of the owner of the executable, which in this case is root.

Note

For Linux, Mac OS X, and AIX, the default location for oratab is /etc/; and for Solaris, it is /var/opt/oracle/. Check the platform-dependent installation manual for the default location on your platform.

Upgrading the remote job agent

It's a good idea to apply any new updates published by Oracle. If you are still using the 11.1.0.6 agent, you will run into some problems that are fixed by the 11.1.0.7 upgrade. The upgrade of the agent is not very difficult once you know that the same patch set should be applied as for the database server installation. This is not very well documented, but this is how it works—start the installer found in the patch set, and then select the agent's ORACLE_HOME to upgrade.

The upgrade process is quick and easy until the last part—the execution of the root shell script. In the 11.1.0.7 patch, you are asked to run root.sh. Run it as root and check the properties of ${ORACLE_HOME}/bin/jssu. This file should have the setuid root privileges to function correctly. The root.sh that we are requested to run contains the following:

#!/bin/sh
/data/app/oracle/schagent/11.1.0.7.0/install/root_schagent.sh

Note

The root_schagent.sh file contains an error that prevents the setuid root bit from being set on the jssu file. The file begins with this:

#!/bin/sh

ORACLE_HOME=%ORACLE_HOME%

This is fine for a Windows system, but not for a Unix or Linux system. In these systems, we expect the following:

#!/bin/sh

ORACLE_HOME=$ORACLE_HOME

Change the root_schagent.sh file accordingly. Make sure that ORACLE_HOME is defined and run the root.sh again. Now the setuid root bits should be in place.

Silent install of the remote job agent

When you decide to use the remote job agent, there will be a lot of installation work. We can use the interactive installer as described before, but this can be done much faster by using the silent install option. This works best when the machines on which you are installing have a uniform setup. First, decide what the ORACLE_HOME_NAME and ORACLE_HOME locations will be. This is standard for all silent Oracle installations. For the agent, we also need the HOST_NAME and the PORT on which the agent will listen at installation time. We can easily change these later, as most of us prefer to do. Here we will see the installation of the base release followed by the patch.

Base release

For the base release, use the same Gateways CD that we used in the interactive installation. Just change the current directory to the gateways directory where you can see the runInstaller executable. The base release is 11.1.0.6. We will immediately upgrade it to the latest possible release, which (at the time of writing) is 11.1.0.7; so we are going to put the base release in an 11.1.0.7.0 directory. If you are not planning to do this upgrade, it would make more sense to install the base release in an 11.1.0.6.0 directory with an adjusted ORACLE_HOME_NAME of schagent111060.

export s_HOST_NAME=pantzer.lan
export s_PORT=15021
./runInstaller -silent -responseFile $PWD/response/schagent.rsp \
  ORACLE_HOME_NAME=schagent111070 \
  ORACLE_HOME=/data/app/oracle/schagent/11.1.0.7.0 \
  s_nameForDBAGrp=dba n_configurationOption=3 FROM_LOCATION="$PWD/stage/products.xml" \
  s_hostName=${s_HOST_NAME} s_port=${s_PORT}

When the installer is ready, run the root.sh script as is usually done after an installation. After this, check that jssu has the setuid root privileges.

ls -l /data/app/oracle/schagent/11.1.0.7.0/bin/jssu

Patch to the latest available level

The Scheduler is improving rapidly, so it is preferable to perform the upgrades at regular intervals. We use the same source of installation for the agent as we do for the database software upgrade. Again, start in the same working directory as that of the interactive installation where you can see the runInstaller executable.

Since 11.1.0.7, Oracle asks during the installation whether you want to be notified by email when a new security patch is released. Normally, we would already have this notification in place, so during the installation we will most likely decline it using:

./runInstaller -silent -responseFile $PWD/response/patchset.rsp \
  ORACLE_HOME_NAME=schagent111070 \
  ORACLE_HOME=/data/app/oracle/schagent/11.1.0.7.0 \
  DECLINE_SECURITY_UPDATES=true

This starts the installer and performs the upgrade of the previously installed ORACLE_HOME. There is a problem in the script that is called from the root.sh that is generated for us. We should fix this with the following before running the root.sh script:

cat /data/app/oracle/schagent/11.1.0.7.0/install/root_schagent.sh |sed "s/%ORACLE_HOME%/$ORACLE_HOME/" >/tmp/rs$$
cp -p /tmp/rs$$ /data/app/oracle/schagent/11.1.0.7.0/install/root_schagent.sh
rm /tmp/rs$$

Now run the root.sh as we normally do. It is important that jssu has the setuid root privileges, so check them using:

ls -l /data/app/oracle/schagent/11.1.0.7.0/bin/jssu

This should look as follows:

-rwsr-x--- 1 root dba 29497 Sep 11 2008 /data/app/oracle/schagent/11.1.0.7.0/bin/jssu

Preparing the database for remote agent usage

There are a few things to check and do before we can use the remote job agent. The following steps will be explained in detail:

  1. Verify that XDB is installed.

  2. Set HTTP port, if not set.

  3. Install the remote job agent objects.

  4. Set the registration password.

Verifying the XDB installation

In order to use a remote Scheduler, the database needs to have XML database installed.

Check this by querying dba_registry for the existence and validity of the COMP_ID 'XDB' as follows:
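
The original shows this check as a screenshot; a query along the following lines (a sketch, the column selection is mine) does the same:

SELECT comp_id, version, status
FROM dba_registry
WHERE comp_id = 'XDB';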

In this case, the XML Database is installed and valid.

Setting the HTTP port

The agent uses the database's HTTP port to contact the database. Check which port is in use and set a port if needed. As with the agent, make sure that the port is above 1023, as the lower ports are reserved for root. There is not much to say about which port is good or bad, but don't take the port that the remote job agent is listening on. As we poor humans have trouble remembering random numbers, it might be convenient to pick 15021 for the agents and 16021 for the database.

Note

Other Oracle products, such as Application Express, use the database HTTP port. When setting this port, ensure that all of the uses are taken into account.

Check the port with this:

SELECT DBMS_XDB.gethttpport FROM dual;

If needed, set the port with this:

--/
begin
DBMS_XDB.sethttpport(16021);
end;
/

Make sure that the agent can contact the database on port 16021 and that this port is not blocked by any firewalls. Now, create the database objects used by the Scheduler Agent.

Creating remote Scheduler objects

Create the remote_scheduler_agent user and its supporting objects in the database using the following:

sqlplus '/ as sysdba' @?/rdbms/admin/prvtrsch.plb

This creates the dbms_isched_remote_access package in the SYS schema, and the restrict_access function plus the register_agent and submit_job_results procedures in the remote_scheduler_agent schema.

Setting registration password

We can set a password that the agent needs to use at the time of registration using the following:

--/
begin
DBMS_SCHEDULER.set_agent_registration_pass ('very_difficult_password');
end;
/

This password will only be used to register the agent. After the registration, an SSL certificate will be used to authenticate the agent.

Configuring a remote agent

The remote agent's configuration is simple. It has to be registered with the database it is going to work for. The registration has to be done manually using the schagent utility found in $ORACLE_HOME/bin.

On Linux, I added a pseudo ORACLE_SID (schagent) to the oratab file (schagent:/data/app/oracle/product/schagent/11.1.0.6:N). We can use that and oraenv to set the right environment variables for us—nice and easy.

Now that we know the registration password, we can use it to register the agent from the machine on which the agent works for us. There is no need to keep a record of this password. It is only used during the registration process and we can change it whenever we want. The following screenshot shows the registration of the agent:
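
In text form, the Linux registration comes down to roughly the following; this is a sketch (exact prompts and output may differ per version), using the host name and ports from this chapter:

$ . oraenv
ORACLE_SID = [oracle] ? schagent
$ schagent -registerdatabase pantzer.lan 16021
Enter the agent registration password:
$ schagent -start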

On Windows, the screen output should look like this:

From now on, the agents should be usable.

Troubleshooting

In case of problems with the installation, check whether the ports are reachable with telnet to the host and port: telnet pantzer.lan 15021 for the agent's port, and telnet pantzer.lan 16021 for the database's HTTP port. This should give a clear screen as a response—definitely not "connection refused" (wrong machine/port, or the service is not running) and not no response at all (firewall). If you get the response "connection refused" on the agent's port, check whether the agent is running. If the agent is not running, start it using schagent -start and try again. If the agent was running, check the port that the agent is using in the agent configuration file.

The agent configuration file is found in $ORACLE_HOME/schagent.conf and in my system contains the following:

# This is the configuration file for the Oracle Scheduler Agent.
#
# The agent will look first for schagent.conf in {EXECUTION_AGENT_DATA}
# and then in {EXECUTION_AGENT_HOME}. It will only use whichever is
# found first and throw an error if none is found.
#
# You can change these values to suit your environment.
# network port to listen on for requests (between 1 and 65536)
PORT=15021
# host name of the host the agent is running on. If this is not
# specified, the resolved name of the local address will be used
HOST_NAME = NLLRONROO.lan
# maximum number of jobs to run simultaneously (between 1 and 1000)
MAX_RUNNING_JOBS=5
# if this is set to TRUE, the agent will reject put_file requests
DISABLE_PUT_FILE=FALSE
# if this is set to TRUE, the agent will reject get_file requests
DISABLE_GET_FILE=FALSE
# if this is set to TRUE, the agent will reject job execution requests
DISABLE_JOB_EXECUTION=FALSE
# the agent will reject any attempt to use any user on this list.
# This list should be comma-separated and is case-insensitive.
DENY_USERS=root,administrator,guest
# if this list is not empty, the agent will only allow use of a user
# on this list. This list should be comma-separated and is
# case-insensitive.
# ALLOW_USERS=
# types of logging to do. Zero or more of the following options:
# OVERVIEW, WARNINGS, ALL, ERROR_STACKS, MILLISECONDS
LOGGING_LEVEL=OVERVIEW,WARNINGS

The first thing that comes to mind once a registration has been done is: "How can I check this?" Unfortunately, there appears to be no way to check the status of the agent's registration—not even in the database. It would be very convenient to have an Oracle view that gives an overview of which agents are talking with the database. I filed an enhancement request (7462577) for this, so with a little luck we will be able to check the status of remote agents in the near future. What we can do is see if the agent is running on the host and check its log file as follows:

Or we can check a specific agent's process as follows:

Or we can check the logfile of the agent as follows:
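
Put together, these checks come down to something like the following sketch; process names and the default log location may differ on your system, and with multiple agents the grep pattern can be narrowed to one agent's home or data directory:

# is an agent running on this host?
ps -ef | grep schagent | grep -v grep
# check the agent's log file
tail ${EXECUTION_AGENT_DATA:-$ORACLE_HOME/data}/agent.log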

Multiple agents on the same host

The remote agent is a very powerful tool and is easy to deploy. We can use a separate installation for every agent that we want to have running on a host. With a little modification of the $ORACLE_HOME/bin/schagent code, we can easily configure multiple agents on a single host using the same software tree.

The contents of $ORACLE_HOME/schagent.conf suggest that the location of this file and of the runtime files can be controlled by the EXECUTION_AGENT_DATA environment variable. We need this ability when we want to run multiple agents on the same host using the same software tree. If you take a look at $ORACLE_HOME/bin/schagent, there is a little surprise: the EXECUTION_AGENT_DATA variable is reset to NULL, making the variable useless. I hope this is a bug that will be fixed soon. For now, I put a comment sign before the line that resets the EXECUTION_AGENT_DATA variable, which led to an enhancement request (7560494). There is a good chance that this will be fixed in the next release. Until then, it is safe to change the code in schagent from:

#!/bin/sh
# set this if you change your Agent home
EXECUTION_AGENT_HOME="/data/app/oracle/product/schagent/11.1.0.6"
# set this to use a different data directory for the Agent
EXECUTION_AGENT_DATA=""

to:

#!/bin/sh
# set this if you change your Agent home
EXECUTION_AGENT_HOME="/data/app/oracle/product/schagent/11.1.0.6"
# set this to use a different data directory for the Agent
# EXECUTION_AGENT_DATA=""

Sometimes an enhancement can be as simple as adding a comment sign, as explained above. When using this approach, the runtime files for the agent will be created in the ${EXECUTION_AGENT_DATA} directory. This includes the agent.key file, which is the SSL certificate that the agent uses. The ${EXECUTION_AGENT_DATA} directory also becomes the data directory. Here, all the administrative and log files that the agent uses to run jobs will be stored. The logging of the jobs run by the agent will be in a log subdirectory. This subdirectory will keep growing because the agent does not perform any log cleaning at the moment. This is the case with the current release, 11.1.0.7, but it will surely improve in a future release of the agent. It seems logical to remove the log files from the agent at the same time that the log is purged from the job log tables in the originating database. The logging of the agent itself goes to ${EXECUTION_AGENT_DATA}/agent.log.

In this file, you will find the log entries of the agent, with details such as when the agent started, when it received a new job, when it started a job, and when it returned a job result to the database.
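
Because the agent does not clean up its log subdirectory itself, a small scheduled cleanup can help. The following is only a sketch; the retention period, and the assumption that everything in log/ older than your job log retention can safely go, are mine:

# remove agent job log files older than 30 days; run as the agent owner
find ${EXECUTION_AGENT_DATA}/log -type f -mtime +30 -exec rm {} \;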

When using multiple agents on the same host, it is a smart move to adjust the customizable part of oraenv so that it also defines and exports EXECUTION_AGENT_DATA. This also implies that each agent gets its own entry in oratab.

The end of the oraenv script could be like this:

#
# Install any "custom" code here
#
EXECUTION_AGENT_DATA=/your/location/${ORACLE_SID}
export EXECUTION_AGENT_DATA

And the oratab entries could be:

schagent{ID1}:/your/agents/ORACLE_HOME:N
schagent{ID2}:/your/agents/ORACLE_HOME:N

These definitions ensure that the software is taken from the correct location, and having the autostart flag set to N makes sure that the database start procedures ignore the schagent* entries when the system starts. However, you do need to create a startup procedure that starts the schagents on your system, as sketched below.
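
Such a startup procedure is not provided by Oracle; the following is a minimal sketch, assuming the oratab entries and data directory layout shown above (the /your/location path is a placeholder):

#!/bin/sh
# Start an agent for every schagent* entry in oratab; run as the agent owner.
ORATAB=/etc/oratab
grep '^schagent' "$ORATAB" | while IFS=: read SID HOME FLAG; do
    ORACLE_HOME=$HOME
    EXECUTION_AGENT_DATA=/your/location/${SID}
    export ORACLE_HOME EXECUTION_AGENT_DATA
    cd "$EXECUTION_AGENT_DATA" && "$ORACLE_HOME/bin/schagent" -start
done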

Credentials

To be able to run jobs on a remote agent, we have to add the credential_name attribute to the job's specification. To do this, we must first create a credential. We can create credentials using DBMS_SCHEDULER.create_credential and remove them with DBMS_SCHEDULER.drop_credential. At the moment, it is not mandatory to use credentials for local external jobs, but Oracle advises us to use them instead of falling back on the (soon to be deprecated) defaults. Using credentials also has documentation benefits: the operating system account that is involved becomes much more obvious. So the advice is to start using credentials. I created a normal operating system user jobs, with the password employed, to run the jobs.

The attributes for credentials are as follows:

Attribute          Description
---------          -----------
username           The username to execute the job as
password           The password to authenticate the user; it is stored in a non-readable form
comments           An optional comment, for example describing what the credential is intended for
database_role      The database role to use when logging in (SYSDBA, SYSOPER, or NULL)
windows_domain     The Windows domain to use when logging in

Run the following code connected as your job's owner to create a credential:

--/
BEGIN
DBMS_SCHEDULER.create_credential(
credential_name => 'jobs_cred',
username => 'jobs',
password => 'employed'
);
END;
/

The creator is the owner of the credential, and other users can be given access to it by granting them the execute privilege:

grant execute on jobs_cred to marvin;

The defined credentials are shown in the dba_scheduler_credentials, all_scheduler_credentials, and user_scheduler_credentials views. On Windows, it is important to note that the user specified in the credential must have the "Log on as a batch job" security policy (the same policy as the jobs user that was used by OEM in the good old days). This policy can be set using the Local Security Policy dialog found in the Administrative Tools control panel. Forgetting to set this policy is the most common cause of Scheduler-to-OS (and Oracle Enterprise Manager-to-OS) interaction problems. If for some reason an administrator does not want to grant this privilege, explain to him or her that this privilege is mandatory for using the agent.
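
To see what is defined, a quick query against one of these views will do; this is a sketch (the column selection is mine, and the password column is not exposed):

SELECT credential_name, username, comments
FROM user_scheduler_credentials;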

Creating job—targeting Unix

There is much similarity between the familiar local external jobs and the new remote external jobs. The main difference is the destination attribute that has to be used for remote external jobs. Here we simply specify the HOST_NAME:PORT combination that we used at the agent's installation time. Let's run a first script that shows what is in the job's environment:

--/
BEGIN
DBMS_SCHEDULER.create_job(
job_name => 'env',
job_type => 'EXECUTABLE',
job_action => '/tmp/test.sh',
auto_drop => FALSE,
enabled => FALSE
);
DBMS_SCHEDULER.set_attribute('env', 'credential_name', 'JOBS_CRED');
DBMS_SCHEDULER.set_attribute('env', 'destination', 'pantzer:15021');
DBMS_SCHEDULER.enable('env');
END;
/

As soon as we enable the job, it is executed. In the user_scheduler_job_run_details view, the ADDITIONAL_INFO column displays the filename under which the agent stored the output of the job, as shown in the following screenshot:

This job output can be found in the agent's $EXECUTION_AGENT_HOME/data/log/, ${EXECUTION_AGENT_DATA}/log/, or $ORACLE_HOME/data/log/ directory, depending on how you configured this. We can go there and check this file, but it is more convenient to have the database hand us this information. In order to get the output, Oracle has provided us with the get_file procedure in dbms_scheduler. In the ADDITIONAL_INFO column, we get EXTERNAL_LOG_ID="job_74892_1", and get_file expects job_74892_1 with _stdout or _stderr appended to it, depending on what we want to get. It is a pity that Oracle did not simply give us the exact filename that get_file needs; instead, we have to filter it out of the ADDITIONAL_INFO column. The following query separates the file name part from the ADDITIONAL_INFO column for the last execution of the job called ENV:

select * from (SELECT log_id, ADDITIONAL_INFO, REGEXP_SUBSTR(ADDITIONAL_INFO,'job[_0-9]*')
AS external_log_id
FROM user_scheduler_job_run_details
WHERE job_name = 'ENV'
ORDER BY log_id desc)
where rownum = 1;

In the get_file call, we need to specify a credential that specifies the same operating system account as the one that we used for the job. So why not just use the same credential? We are using it here:

--/
DECLARE
l_clob CLOB;
l_ADDITIONAL_INFO VARCHAR2(50);
l_external_log_id VARCHAR2(50);
BEGIN
SELECT ADDITIONAL_INFO, external_log_id
INTO l_ADDITIONAL_INFO, l_external_log_id
FROM (SELECT log_id, ADDITIONAL_INFO, REGEXP_SUBSTR(ADDITIONAL_INFO,'job[_0-9]*')
AS external_log_id
FROM user_scheduler_job_run_details
WHERE job_name = 'ENV'
ORDER BY log_id DESC)
WHERE ROWNUM = 1;
DBMS_OUTPUT.put_line('ADDITIONAL_INFO: ' || l_ADDITIONAL_INFO);
DBMS_OUTPUT.put_line('EXTERNAL_LOG_ID: ' || l_external_log_id);
DBMS_LOB.createtemporary(l_clob, FALSE);
DBMS_SCHEDULER.get_file
(
source_file => l_external_log_id || '_stdout',
credential_name => 'JOBS_CRED',
file_contents => l_clob,
source_host => 'pantzer:15021'
);
DBMS_OUTPUT.put_line('stdout:');
DBMS_OUTPUT.put_line(l_clob);
END;
/

This gives a quite surprising output:

The PWD variable shows the location from where the schagent was started. In this case, it is the home directory of the oracle user; not the home directory of the user specified in the credential, and not / (root) as with conventional external jobs. My advice is not to rely on this directory, as it is subject to change. After every release of Oracle, we see that less information is passed from the database to the environment of a job. When starting the schagent, it is best to first change the current directory to the $EXECUTION_AGENT_DATA directory and then start the agent. This makes sure that you always start the agent using the same environment, and also prevents any unwanted mixing up of directories being used by different agents.
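
In practice, starting an agent by hand could then look like this (a sketch, assuming EXECUTION_AGENT_DATA is set as discussed earlier):

cd ${EXECUTION_AGENT_DATA}
$ORACLE_HOME/bin/schagent -start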

During the execution of the job, both the stdout and the stderr files are created. The latter is deleted at the end of the job if it is empty. If you request an output file of an existing job that did not generate the requested file, the returned output is empty.

When you request a job file for a job that did not run, Oracle returns ORA-01031: insufficient privileges. This may look strange, but there is a possibility that you provided the wrong credential. When you specify the wrong credential, it is possible that the operating system user (specified in that credential) has no privileges to access the directory in which the log file is stored.

The agent also has a file in which it administers the jobs it knows about. This file is log/job_idss and has enough information to tell us where the job came from, who the job ran as (both the schema and the runtime user), and what was executed.

Creating job—targeting Windows

On Windows, everything is almost the same; almost, but not quite. On Unix systems, we can run any script and get the desired results. On Windows, we have to call cmd.exe and have it call our script with the script's optional arguments. So the script has to be passed as the second argument to cmd.exe. To get a test similar to the Linux one, we have to use the following code:

--/
BEGIN
DBMS_SCHEDULER.create_job
(
job_name => 'env_windows',
job_type => 'EXECUTABLE',
number_of_arguments => 2,
job_action => 'C:\windows\system32\cmd.exe',
auto_drop => FALSE,
enabled => FALSE
);
DBMS_SCHEDULER.set_job_argument_value('env_windows',1,'/c');
DBMS_SCHEDULER.set_job_argument_value('env_windows',2, 'd:\temp\test.cmd');
DBMS_SCHEDULER.set_attribute('env_windows', 'credential_name', 'jobs_cred');
DBMS_SCHEDULER.set_attribute('env_windows', 'destination', 'nllronroo.lan:15021');
DBMS_SCHEDULER.enable('env_windows');
END;
/

The job called env_windows uses cmd.exe to call the d:\temp\test.cmd script. The agent used to run this job can be found on the nllronroo machine at port 15021.

It generates the following output:

C:\WINDOWS\system32>set
ComSpec=C:\WINDOWS\system32\cmd.exe
logProperty=false
OS=Windows_NT
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.JS;.WS
PROMPT=$P$G
SystemDrive=C:
SystemRoot=C:\WINDOWS

In this case, the test.cmd script was:

set

This is roughly the equivalent of the test.sh script:

env

Runtime observations of the remote Scheduler

As the remote Scheduler Agent runs more or less independently of the database that sends it jobs, it is possible that the originating database goes offline while a long-running job is underway. The system is robust enough to survive this. There are several situations to distinguish:

  • The database goes offline, but then comes back online before the job finishes

  • The database goes offline and the job finishes before the database is online again

  • The job agent stops before the job ends

In the first situation, where the database is back online before the job finishes, the agent submits the job completion data as if nothing happened at all. In the database, the remote job keeps its RUNNING status all along and is handled as if nothing had happened.

In the second situation, the remote job agent tries to submit the job completion data to the database and this time it fails. The data is saved as pending completion data and the agent continues normal processing. After one hour, the agent tries to resubmit the data, and it keeps trying every hour until it succeeds. In the period following the restart of the database, the job keeps its RUNNING status until the agent succeeds in submitting the completion data. When the database submits a new job to the agent before the one-hour resubmission interval has passed, the agent does nothing more than start the new job. Here is a little room for improvement: the agent knows that the database is back online and could resubmit the job completion data as soon as it receives a new job from that database. An enhancement request has been filed for this.

In the third situation, there is a real problem. The job continues its run until it completes; only, the agent has no way to tell what happened to the job because the agent itself was down. As soon as the agent is started again, it submits a "stopped by SYS" signal to the database and marks the job as FAILED. I must admit that I don't know how this could be improved, other than that I expected the running job to be terminated when the agent is taken offline. Now it happily completes its work and still gets the possibly incorrect FAILED flag. The point is that the agent has no way to tell what really happened.

Resource Manager

At this moment, it is not possible to use the Resource Manager for remote external jobs, but this is surely going to change. We can put the jobs in a job class that has a resource consumer group mapped to it, but for remote external jobs these definitions are ignored. For regular jobs, we can find the job class that was used when the job ran in the *_scheduler_job_log views; for remote external jobs, that column contains a NULL value, indicating that the Resource Manager settings were not applied to them. An enhancement request has been filed for this. The Resource Manager integration for the remote Scheduler Agent needs at least the active session pool, which would enable the database to limit the number of parallel jobs submitted to the agent. There is only a very limited control in the schagent.conf file (MAX_RUNNING_JOBS), which is not flexible enough and is invisible to the database administrator.
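
For completeness, this is what such a job class mapping looks like (currently ignored for remote external jobs, as described above). It is only a sketch: the class name is made up, LOW_GROUP is one of the default consumer groups, and the MANAGE SCHEDULER privilege is needed to create the class:

--/
BEGIN
DBMS_SCHEDULER.create_job_class(
job_class_name => 'REMOTE_ENV_CLASS',
resource_consumer_group => 'LOW_GROUP',
comments => 'Intended to throttle the remote external jobs'
);
DBMS_SCHEDULER.set_attribute('ENV', 'job_class', 'REMOTE_ENV_CLASS');
END;
/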

Summary

In this chapter, we have seen the most important change in the 11g Scheduler—the remote external job. It is easy to set up and configure but, at the same time, it seems impossible to see which agents are configured against the database. At the moment, Resource Manager support is also missing.

In this chapter, we saw:

  • How robust the agent is

  • How easy it is to configure multiple agents on a single host using a single software installation

  • How to get remote log files from the agent to the originating database

  • How to target a Windows system as a remote platform

  • How to target a Unix/Linux system as a remote platform

  • How to check the port that the agent is using

  • How to check the certificate that the agent is using

In the next chapter, we are going to take a look at the Scheduler's event system—how events are generated and how we can act on them.